00:00:00.001 Started by upstream project "autotest-per-patch" build number 126135
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.103 The recommended git tool is: git
00:00:00.103 using credential 00000000-0000-0000-0000-000000000002
00:00:00.105 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.148 Fetching changes from the remote Git repository
00:00:00.151 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.191 Using shallow fetch with depth 1
00:00:00.191 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.191 > git --version # timeout=10
00:00:00.240 > git --version # 'git version 2.39.2'
00:00:00.240 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.260 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.260 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.898 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.909 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.920 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:06.920 > git config core.sparsecheckout # timeout=10
00:00:06.932 > git read-tree -mu HEAD # timeout=10
00:00:06.949 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:06.967 Commit message: "inventory: add WCP3 to free inventory"
00:00:06.967 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:07.085 [Pipeline] Start of Pipeline
00:00:07.107 [Pipeline] library
00:00:07.110 Loading library shm_lib@master
00:00:07.110 Library shm_lib@master is cached. Copying from home.
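The same shallow checkout can be replayed outside Jenkins with a stock git client. A minimal sketch, assuming direct access to the Gerrit mirror (the job itself additionally routes through the proxy-dmz.intel.com:911 HTTP proxy and supplies credentials via GIT_ASKPASS):

  git init jbp && cd jbp
  # depth-1 fetch of the branch tip, as the job does above
  git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  # 9bf0dabe... is the FETCH_HEAD commit reported above ("inventory: add WCP3 to free inventory")
  git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d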
00:00:07.130 [Pipeline] node 00:00:22.138 Still waiting to schedule task 00:00:22.138 ‘CYP11’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘CYP13’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘CYP7’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘CYP8’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘FCP03’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘FCP04’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘FCP07’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘FCP08’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘FCP09’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘FCP10’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘FCP11’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘FCP12’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘GP10’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘GP12’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘GP13’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘GP14’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘GP15’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘GP16’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘GP18’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.138 ‘GP19’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘GP1’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘GP20’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘GP21’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘GP22’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘GP3’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘GP4’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘GP5’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘GP6’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘GP8’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘GP9’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘Jenkins’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘ME1’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘ME2’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘ME3’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘PE5’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM10’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM11’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM1’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM28’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM29’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM2’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM30’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM31’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM32’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM33’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM34’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM35’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM5’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM6’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM7’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘SM8’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘VM-host-PE1’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘VM-host-PE2’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘VM-host-PE3’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘VM-host-PE4’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘VM-host-SM18’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘VM-host-WFP1’ is offline 00:00:22.139 ‘VM-host-WFP25’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WCP0’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WCP2’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WCP4’ doesn’t have label 
‘vagrant-vm-host’ 00:00:22.139 ‘WFP10’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP11’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP12’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP13’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP15’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP17’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP22’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP23’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP27’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP28’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP29’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP2’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP31’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP32’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP33’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP34’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP35’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP36’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP37’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP38’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP41’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP42’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP46’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP47’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP49’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP51’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP53’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP63’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP65’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP66’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP67’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP68’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP69’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP6’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘WFP9’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘ipxe-staging’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘spdk-pxe-01’ doesn’t have label ‘vagrant-vm-host’ 00:00:22.139 ‘spdk-pxe-02’ doesn’t have label ‘vagrant-vm-host’ 00:01:33.137 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_3 00:01:33.139 [Pipeline] { 00:01:33.151 [Pipeline] catchError 00:01:33.152 [Pipeline] { 00:01:33.169 [Pipeline] wrap 00:01:33.181 [Pipeline] { 00:01:33.194 [Pipeline] stage 00:01:33.197 [Pipeline] { (Prologue) 00:01:33.223 [Pipeline] echo 00:01:33.224 Node: VM-host-SM9 00:01:33.232 [Pipeline] cleanWs 00:01:33.242 [WS-CLEANUP] Deleting project workspace... 00:01:33.242 [WS-CLEANUP] Deferred wipeout is used... 
00:01:33.247 [WS-CLEANUP] done
00:01:33.402 [Pipeline] setCustomBuildProperty
00:01:33.486 [Pipeline] httpRequest
00:01:33.503 [Pipeline] echo
00:01:33.504 Sorcerer 10.211.164.101 is alive
00:01:33.511 [Pipeline] httpRequest
00:01:33.515 HttpMethod: GET
00:01:33.515 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:01:33.516 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:01:33.517 Response Code: HTTP/1.1 200 OK
00:01:33.517 Success: Status code 200 is in the accepted range: 200,404
00:01:33.517 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:01:33.661 [Pipeline] sh
00:01:33.939 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:01:33.953 [Pipeline] httpRequest
00:01:33.970 [Pipeline] echo
00:01:33.971 Sorcerer 10.211.164.101 is alive
00:01:33.979 [Pipeline] httpRequest
00:01:33.984 HttpMethod: GET
00:01:33.985 URL: http://10.211.164.101/packages/spdk_7d88ad9b8362959e45e4e2dfcc70f1bdea178c62.tar.gz
00:01:33.985 Sending request to url: http://10.211.164.101/packages/spdk_7d88ad9b8362959e45e4e2dfcc70f1bdea178c62.tar.gz
00:01:33.986 Response Code: HTTP/1.1 200 OK
00:01:33.987 Success: Status code 200 is in the accepted range: 200,404
00:01:33.987 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/spdk_7d88ad9b8362959e45e4e2dfcc70f1bdea178c62.tar.gz
00:01:36.158 [Pipeline] sh
00:01:36.436 + tar --no-same-owner -xf spdk_7d88ad9b8362959e45e4e2dfcc70f1bdea178c62.tar.gz
00:01:39.756 [Pipeline] sh
00:01:40.034 + git -C spdk log --oneline -n5
00:01:40.034 7d88ad9b8 bdevperf: allocate data buffers based on bdev's socket id
00:01:40.034 9cfa1d5f6 bdev/nvme: populate socket_id
00:01:40.034 4a45fec0d bdev: add socket_id to spdk_bdev
00:01:40.034 e8fe15377 fio/nvme: use socket_id when allocating io buffers
00:01:40.034 25161080d spdk_nvme_perf: allocate buffers from socket_id reported by ctrlr
00:01:40.052 [Pipeline] writeFile
00:01:40.070 [Pipeline] sh
00:01:40.351 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:40.361 [Pipeline] sh
00:01:40.633 + cat autorun-spdk.conf
00:01:40.633 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:40.633 SPDK_TEST_NVMF=1
00:01:40.633 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:40.633 SPDK_TEST_USDT=1
00:01:40.633 SPDK_TEST_NVMF_MDNS=1
00:01:40.633 SPDK_RUN_UBSAN=1
00:01:40.633 NET_TYPE=virt
00:01:40.633 SPDK_JSONRPC_GO_CLIENT=1
00:01:40.633 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:40.639 RUN_NIGHTLY=0
00:01:40.640 [Pipeline] }
00:01:40.655 [Pipeline] // stage
00:01:40.671 [Pipeline] stage
00:01:40.673 [Pipeline] { (Run VM)
00:01:40.694 [Pipeline] sh
00:01:40.975 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:40.975 + echo 'Start stage prepare_nvme.sh'
00:01:40.975 Start stage prepare_nvme.sh
00:01:40.975 + [[ -n 3 ]]
00:01:40.975 + disk_prefix=ex3
00:01:40.975 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_3 ]]
00:01:40.975 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/autorun-spdk.conf ]]
00:01:40.975 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/autorun-spdk.conf
00:01:40.975 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:40.975 ++ SPDK_TEST_NVMF=1
00:01:40.975 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:40.975 ++ SPDK_TEST_USDT=1
00:01:40.975 ++ SPDK_TEST_NVMF_MDNS=1
00:01:40.975 ++ SPDK_RUN_UBSAN=1
00:01:40.975 ++ NET_TYPE=virt
00:01:40.975 ++ SPDK_JSONRPC_GO_CLIENT=1
00:01:40.975 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:40.975 ++ RUN_NIGHTLY=0
00:01:40.975 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_3
00:01:40.975 + nvme_files=()
00:01:40.975 + declare -A nvme_files
00:01:40.975 + backend_dir=/var/lib/libvirt/images/backends
00:01:40.975 + nvme_files['nvme.img']=5G
00:01:40.975 + nvme_files['nvme-cmb.img']=5G
00:01:40.975 + nvme_files['nvme-multi0.img']=4G
00:01:40.975 + nvme_files['nvme-multi1.img']=4G
00:01:40.975 + nvme_files['nvme-multi2.img']=4G
00:01:40.975 + nvme_files['nvme-openstack.img']=8G
00:01:40.975 + nvme_files['nvme-zns.img']=5G
00:01:40.975 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:40.975 + (( SPDK_TEST_FTL == 1 ))
00:01:40.975 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:40.975 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:40.975 + for nvme in "${!nvme_files[@]}"
00:01:40.975 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:01:40.975 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:40.975 + for nvme in "${!nvme_files[@]}"
00:01:40.975 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:01:40.976 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:40.976 + for nvme in "${!nvme_files[@]}"
00:01:40.976 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:01:41.233 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:41.233 + for nvme in "${!nvme_files[@]}"
00:01:41.233 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:01:41.233 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:41.233 + for nvme in "${!nvme_files[@]}"
00:01:41.234 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:01:41.234 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:41.234 + for nvme in "${!nvme_files[@]}"
00:01:41.234 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:01:41.492 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:41.492 + for nvme in "${!nvme_files[@]}"
00:01:41.492 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:01:41.492 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:41.492 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:01:41.492 + echo 'End stage prepare_nvme.sh'
00:01:41.492 End stage prepare_nvme.sh
00:01:41.506 [Pipeline] sh
00:01:41.791 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:41.791 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38
00:01:41.791 
00:01:41.791 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_3/spdk/scripts/vagrant
00:01:41.791 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_3/spdk
00:01:41.791 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_3
00:01:41.791 HELP=0
00:01:41.791 DRY_RUN=0
00:01:41.791 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,
00:01:41.791 NVME_DISKS_TYPE=nvme,nvme,
00:01:41.791 NVME_AUTO_CREATE=0
00:01:41.791 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,
00:01:41.791 NVME_CMB=,,
00:01:41.791 NVME_PMR=,,
00:01:41.791 NVME_ZNS=,,
00:01:41.791 NVME_MS=,,
00:01:41.791 NVME_FDP=,,
00:01:41.791 SPDK_VAGRANT_DISTRO=fedora38
00:01:41.791 SPDK_VAGRANT_VMCPU=10
00:01:41.791 SPDK_VAGRANT_VMRAM=12288
00:01:41.791 SPDK_VAGRANT_PROVIDER=libvirt
00:01:41.791 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:41.791 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:41.791 SPDK_OPENSTACK_NETWORK=0
00:01:41.791 VAGRANT_PACKAGE_BOX=0
00:01:41.791 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile
00:01:41.791 FORCE_DISTRO=true
00:01:41.791 VAGRANT_BOX_VERSION=
00:01:41.791 EXTRA_VAGRANTFILES=
00:01:41.791 NIC_MODEL=e1000
00:01:41.791 
00:01:41.791 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora38-libvirt'
00:01:41.792 /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_3
00:01:45.972 Bringing machine 'default' up with 'libvirt' provider...
00:01:46.541 ==> default: Creating image (snapshot of base box volume).
00:01:46.541 ==> default: Creating domain with the following settings...
00:01:46.541 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720795284_284439d5d3066a73e84d
00:01:46.541 ==> default: -- Domain type: kvm
00:01:46.541 ==> default: -- Cpus: 10
00:01:46.541 ==> default: -- Feature: acpi
00:01:46.541 ==> default: -- Feature: apic
00:01:46.541 ==> default: -- Feature: pae
00:01:46.541 ==> default: -- Memory: 12288M
00:01:46.541 ==> default: -- Memory Backing: hugepages:
00:01:46.541 ==> default: -- Management MAC:
00:01:46.541 ==> default: -- Loader:
00:01:46.541 ==> default: -- Nvram:
00:01:46.541 ==> default: -- Base box: spdk/fedora38
00:01:46.541 ==> default: -- Storage pool: default
00:01:46.541 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720795284_284439d5d3066a73e84d.img (20G)
00:01:46.541 ==> default: -- Volume Cache: default
00:01:46.541 ==> default: -- Kernel:
00:01:46.541 ==> default: -- Initrd:
00:01:46.541 ==> default: -- Graphics Type: vnc
00:01:46.541 ==> default: -- Graphics Port: -1
00:01:46.541 ==> default: -- Graphics IP: 127.0.0.1
00:01:46.541 ==> default: -- Graphics Password: Not defined
00:01:46.541 ==> default: -- Video Type: cirrus
00:01:46.541 ==> default: -- Video VRAM: 9216
00:01:46.541 ==> default: -- Sound Type:
00:01:46.541 ==> default: -- Keymap: en-us
00:01:46.541 ==> default: -- TPM Path:
00:01:46.541 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:46.541 ==> default: -- Command line args:
00:01:46.541 ==> default: -> value=-device,
00:01:46.541 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:46.541 ==> default: -> value=-drive,
00:01:46.541 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0,
00:01:46.541 ==> default: -> value=-device,
00:01:46.541 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:46.541 ==> default: -> value=-device,
00:01:46.541 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:46.541 ==> default: -> value=-drive,
00:01:46.541 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:46.541 ==> default: -> value=-device,
00:01:46.541 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:46.541 ==> default: -> value=-drive,
00:01:46.541 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:46.541 ==> default: -> value=-device,
00:01:46.541 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:46.541 ==> default: -> value=-drive,
00:01:46.541 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:46.541 ==> default: -> value=-device,
00:01:46.541 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:46.802 ==> default: Creating shared folders metadata...
00:01:46.802 ==> default: Starting domain.
00:01:48.181 ==> default: Waiting for domain to get an IP address...
00:02:06.262 ==> default: Waiting for SSH to become available...
00:02:06.262 ==> default: Configuring and enabling network interfaces...
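Joined together, the "-> value=" pairs above amount to qemu arguments along the following lines. This is only a sketch of the NVMe portion of the libvirt-generated command line (machine type, memory, boot disk and network arguments are omitted), using the emulator path from the Setup output earlier:

  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 ... \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

The second controller (serial 12341) carries three namespaces backed by the multi0/multi1/multi2 images, which is why the guest later reports nvme1 with block devices nvme1n1, nvme1n2 and nvme1n3.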
00:02:09.542 default: SSH address: 192.168.121.142:22 00:02:09.542 default: SSH username: vagrant 00:02:09.542 default: SSH auth method: private key 00:02:11.439 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:19.537 ==> default: Mounting SSHFS shared folder... 00:02:20.469 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:20.469 ==> default: Checking Mount.. 00:02:21.405 ==> default: Folder Successfully Mounted! 00:02:21.405 ==> default: Running provisioner: file... 00:02:21.991 default: ~/.gitconfig => .gitconfig 00:02:22.248 00:02:22.248 SUCCESS! 00:02:22.248 00:02:22.248 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora38-libvirt and type "vagrant ssh" to use. 00:02:22.248 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:22.248 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora38-libvirt" to destroy all trace of vm. 00:02:22.248 00:02:22.258 [Pipeline] } 00:02:22.279 [Pipeline] // stage 00:02:22.308 [Pipeline] dir 00:02:22.309 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora38-libvirt 00:02:22.310 [Pipeline] { 00:02:22.322 [Pipeline] catchError 00:02:22.323 [Pipeline] { 00:02:22.334 [Pipeline] sh 00:02:22.611 + vagrant ssh-config --host vagrant 00:02:22.611 + sed -ne /^Host/,$p 00:02:22.612 + tee ssh_conf 00:02:26.784 Host vagrant 00:02:26.784 HostName 192.168.121.142 00:02:26.784 User vagrant 00:02:26.784 Port 22 00:02:26.784 UserKnownHostsFile /dev/null 00:02:26.784 StrictHostKeyChecking no 00:02:26.784 PasswordAuthentication no 00:02:26.784 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:26.784 IdentitiesOnly yes 00:02:26.784 LogLevel FATAL 00:02:26.784 ForwardAgent yes 00:02:26.784 ForwardX11 yes 00:02:26.784 00:02:26.797 [Pipeline] withEnv 00:02:26.800 [Pipeline] { 00:02:26.816 [Pipeline] sh 00:02:27.091 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:27.091 source /etc/os-release 00:02:27.091 [[ -e /image.version ]] && img=$(< /image.version) 00:02:27.091 # Minimal, systemd-like check. 00:02:27.091 if [[ -e /.dockerenv ]]; then 00:02:27.091 # Clear garbage from the node's name: 00:02:27.091 # agt-er_autotest_547-896 -> autotest_547-896 00:02:27.091 # $HOSTNAME is the actual container id 00:02:27.091 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:27.091 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:27.091 # We can assume this is a mount from a host where container is running, 00:02:27.091 # so fetch its hostname to easily identify the target swarm worker. 
00:02:27.091 container="$(< /etc/hostname) ($agent)" 00:02:27.091 else 00:02:27.091 # Fallback 00:02:27.091 container=$agent 00:02:27.091 fi 00:02:27.091 fi 00:02:27.091 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:27.091 00:02:27.101 [Pipeline] } 00:02:27.122 [Pipeline] // withEnv 00:02:27.130 [Pipeline] setCustomBuildProperty 00:02:27.146 [Pipeline] stage 00:02:27.148 [Pipeline] { (Tests) 00:02:27.168 [Pipeline] sh 00:02:27.446 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:27.459 [Pipeline] sh 00:02:27.735 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:27.750 [Pipeline] timeout 00:02:27.750 Timeout set to expire in 40 min 00:02:27.752 [Pipeline] { 00:02:27.769 [Pipeline] sh 00:02:28.064 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:28.667 HEAD is now at 7d88ad9b8 bdevperf: allocate data buffers based on bdev's socket id 00:02:28.679 [Pipeline] sh 00:02:28.956 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:29.226 [Pipeline] sh 00:02:29.506 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:29.523 [Pipeline] sh 00:02:29.800 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:29.800 ++ readlink -f spdk_repo 00:02:29.800 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:29.800 + [[ -n /home/vagrant/spdk_repo ]] 00:02:29.800 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:29.800 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:29.800 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:29.800 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:29.800 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:29.800 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:29.800 + cd /home/vagrant/spdk_repo 00:02:29.800 + source /etc/os-release 00:02:29.800 ++ NAME='Fedora Linux' 00:02:29.800 ++ VERSION='38 (Cloud Edition)' 00:02:29.800 ++ ID=fedora 00:02:29.800 ++ VERSION_ID=38 00:02:29.800 ++ VERSION_CODENAME= 00:02:29.800 ++ PLATFORM_ID=platform:f38 00:02:29.800 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:29.800 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:29.800 ++ LOGO=fedora-logo-icon 00:02:29.800 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:29.800 ++ HOME_URL=https://fedoraproject.org/ 00:02:29.800 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:29.800 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:29.800 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:29.800 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:29.800 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:29.800 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:29.800 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:29.800 ++ SUPPORT_END=2024-05-14 00:02:29.800 ++ VARIANT='Cloud Edition' 00:02:29.800 ++ VARIANT_ID=cloud 00:02:29.800 + uname -a 00:02:29.800 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:29.800 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:30.366 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:30.366 Hugepages 00:02:30.366 node hugesize free / total 00:02:30.366 node0 1048576kB 0 / 0 00:02:30.366 node0 2048kB 0 / 0 00:02:30.366 00:02:30.366 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:30.366 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:30.366 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:30.366 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:30.366 + rm -f /tmp/spdk-ld-path 00:02:30.366 + source autorun-spdk.conf 00:02:30.366 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:30.366 ++ SPDK_TEST_NVMF=1 00:02:30.366 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:30.366 ++ SPDK_TEST_USDT=1 00:02:30.366 ++ SPDK_TEST_NVMF_MDNS=1 00:02:30.366 ++ SPDK_RUN_UBSAN=1 00:02:30.366 ++ NET_TYPE=virt 00:02:30.366 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:30.366 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:30.366 ++ RUN_NIGHTLY=0 00:02:30.366 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:30.366 + [[ -n '' ]] 00:02:30.366 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:30.366 + for M in /var/spdk/build-*-manifest.txt 00:02:30.366 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:30.366 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:30.366 + for M in /var/spdk/build-*-manifest.txt 00:02:30.366 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:30.366 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:30.366 ++ uname 00:02:30.366 + [[ Linux == \L\i\n\u\x ]] 00:02:30.366 + sudo dmesg -T 00:02:30.366 + sudo dmesg --clear 00:02:30.623 + dmesg_pid=5154 00:02:30.623 + sudo dmesg -Tw 00:02:30.623 + [[ Fedora Linux == FreeBSD ]] 00:02:30.623 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:30.623 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:30.623 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:30.623 + [[ -x /usr/src/fio-static/fio ]] 00:02:30.623 + 
export FIO_BIN=/usr/src/fio-static/fio 00:02:30.623 + FIO_BIN=/usr/src/fio-static/fio 00:02:30.623 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:30.623 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:30.623 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:30.623 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:30.623 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:30.623 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:30.623 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:30.623 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:30.623 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:30.623 Test configuration: 00:02:30.623 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:30.623 SPDK_TEST_NVMF=1 00:02:30.623 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:30.623 SPDK_TEST_USDT=1 00:02:30.623 SPDK_TEST_NVMF_MDNS=1 00:02:30.623 SPDK_RUN_UBSAN=1 00:02:30.623 NET_TYPE=virt 00:02:30.623 SPDK_JSONRPC_GO_CLIENT=1 00:02:30.623 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:30.623 RUN_NIGHTLY=0 14:42:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:30.623 14:42:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:30.623 14:42:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:30.623 14:42:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:30.623 14:42:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.623 14:42:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.623 14:42:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.623 14:42:09 -- paths/export.sh@5 -- $ export PATH 00:02:30.623 14:42:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.624 14:42:09 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:30.624 14:42:09 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:30.624 14:42:09 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720795329.XXXXXX 00:02:30.624 14:42:09 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1720795329.LFUB8B 00:02:30.624 14:42:09 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:30.624 14:42:09 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:30.624 14:42:09 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:30.624 14:42:09 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:30.624 14:42:09 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:30.624 14:42:09 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:30.624 14:42:09 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:30.624 14:42:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.624 14:42:09 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:02:30.624 14:42:09 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:30.624 14:42:09 -- pm/common@17 -- $ local monitor 00:02:30.624 14:42:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.624 14:42:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.624 14:42:09 -- pm/common@25 -- $ sleep 1 00:02:30.624 14:42:09 -- pm/common@21 -- $ date +%s 00:02:30.624 14:42:09 -- pm/common@21 -- $ date +%s 00:02:30.624 14:42:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720795329 00:02:30.624 14:42:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720795329 00:02:30.624 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720795329_collect-vmstat.pm.log 00:02:30.624 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720795329_collect-cpu-load.pm.log 00:02:31.556 14:42:10 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:31.557 14:42:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:31.557 14:42:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:31.557 14:42:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:31.557 14:42:10 -- spdk/autobuild.sh@16 -- $ date -u 00:02:31.557 Fri Jul 12 02:42:10 PM UTC 2024 00:02:31.557 14:42:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:31.557 v24.09-pre-232-g7d88ad9b8 00:02:31.557 14:42:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:31.557 14:42:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:31.557 14:42:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:31.557 14:42:10 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:31.557 14:42:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:31.557 14:42:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.557 ************************************ 00:02:31.557 START TEST ubsan 00:02:31.557 ************************************ 00:02:31.557 using ubsan 00:02:31.557 14:42:10 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:31.557 00:02:31.557 
real 0m0.000s 00:02:31.557 user 0m0.000s 00:02:31.557 sys 0m0.000s 00:02:31.557 14:42:10 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:31.557 14:42:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:31.557 ************************************ 00:02:31.557 END TEST ubsan 00:02:31.557 ************************************ 00:02:31.557 14:42:10 -- common/autotest_common.sh@1142 -- $ return 0 00:02:31.557 14:42:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:31.557 14:42:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:31.557 14:42:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:31.557 14:42:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:31.557 14:42:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:31.557 14:42:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:31.557 14:42:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:31.557 14:42:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:31.557 14:42:10 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:02:31.815 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:31.815 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:32.072 Using 'verbs' RDMA provider 00:02:45.220 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:57.414 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:57.414 go version go1.21.1 linux/amd64 00:02:57.414 Creating mk/config.mk...done. 00:02:57.414 Creating mk/cc.flags.mk...done. 00:02:57.414 Type 'make' to build. 00:02:57.414 14:42:35 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:57.414 14:42:35 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:57.414 14:42:35 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:57.414 14:42:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:57.414 ************************************ 00:02:57.414 START TEST make 00:02:57.414 ************************************ 00:02:57.414 14:42:35 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:57.414 make[1]: Nothing to be done for 'all'. 
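The configure and build step above can be reproduced on a comparable Fedora 38 host. A rough sketch, assuming a fresh SPDK checkout with submodules and a fio source tree at /usr/src/fio (as present on this CI image); the flags mirror the config_params printed by autobuild above:

  git clone https://github.com/spdk/spdk && cd spdk
  git submodule update --init   # pulls the dpdk/ submodule that Meson configures below
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
  make -j10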
00:03:15.485 The Meson build system 00:03:15.485 Version: 1.3.1 00:03:15.485 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:15.485 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:15.485 Build type: native build 00:03:15.485 Program cat found: YES (/usr/bin/cat) 00:03:15.485 Project name: DPDK 00:03:15.485 Project version: 24.03.0 00:03:15.485 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:15.485 C linker for the host machine: cc ld.bfd 2.39-16 00:03:15.485 Host machine cpu family: x86_64 00:03:15.485 Host machine cpu: x86_64 00:03:15.485 Message: ## Building in Developer Mode ## 00:03:15.485 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:15.485 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:15.485 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:15.485 Program python3 found: YES (/usr/bin/python3) 00:03:15.485 Program cat found: YES (/usr/bin/cat) 00:03:15.485 Compiler for C supports arguments -march=native: YES 00:03:15.485 Checking for size of "void *" : 8 00:03:15.485 Checking for size of "void *" : 8 (cached) 00:03:15.485 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:15.485 Library m found: YES 00:03:15.485 Library numa found: YES 00:03:15.485 Has header "numaif.h" : YES 00:03:15.485 Library fdt found: NO 00:03:15.485 Library execinfo found: NO 00:03:15.485 Has header "execinfo.h" : YES 00:03:15.485 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:15.485 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:15.485 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:15.485 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:15.485 Run-time dependency openssl found: YES 3.0.9 00:03:15.485 Run-time dependency libpcap found: YES 1.10.4 00:03:15.485 Has header "pcap.h" with dependency libpcap: YES 00:03:15.485 Compiler for C supports arguments -Wcast-qual: YES 00:03:15.485 Compiler for C supports arguments -Wdeprecated: YES 00:03:15.485 Compiler for C supports arguments -Wformat: YES 00:03:15.485 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:15.485 Compiler for C supports arguments -Wformat-security: NO 00:03:15.485 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:15.485 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:15.485 Compiler for C supports arguments -Wnested-externs: YES 00:03:15.485 Compiler for C supports arguments -Wold-style-definition: YES 00:03:15.485 Compiler for C supports arguments -Wpointer-arith: YES 00:03:15.485 Compiler for C supports arguments -Wsign-compare: YES 00:03:15.485 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:15.485 Compiler for C supports arguments -Wundef: YES 00:03:15.485 Compiler for C supports arguments -Wwrite-strings: YES 00:03:15.485 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:15.485 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:15.485 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:15.485 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:15.485 Program objdump found: YES (/usr/bin/objdump) 00:03:15.485 Compiler for C supports arguments -mavx512f: YES 00:03:15.485 Checking if "AVX512 checking" compiles: YES 00:03:15.486 Fetching value of define "__SSE4_2__" : 1 00:03:15.486 Fetching value of define 
"__AES__" : 1 00:03:15.486 Fetching value of define "__AVX__" : 1 00:03:15.486 Fetching value of define "__AVX2__" : 1 00:03:15.486 Fetching value of define "__AVX512BW__" : (undefined) 00:03:15.486 Fetching value of define "__AVX512CD__" : (undefined) 00:03:15.486 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:15.486 Fetching value of define "__AVX512F__" : (undefined) 00:03:15.486 Fetching value of define "__AVX512VL__" : (undefined) 00:03:15.486 Fetching value of define "__PCLMUL__" : 1 00:03:15.486 Fetching value of define "__RDRND__" : 1 00:03:15.486 Fetching value of define "__RDSEED__" : 1 00:03:15.486 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:15.486 Fetching value of define "__znver1__" : (undefined) 00:03:15.486 Fetching value of define "__znver2__" : (undefined) 00:03:15.486 Fetching value of define "__znver3__" : (undefined) 00:03:15.486 Fetching value of define "__znver4__" : (undefined) 00:03:15.486 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:15.486 Message: lib/log: Defining dependency "log" 00:03:15.486 Message: lib/kvargs: Defining dependency "kvargs" 00:03:15.486 Message: lib/telemetry: Defining dependency "telemetry" 00:03:15.486 Checking for function "getentropy" : NO 00:03:15.486 Message: lib/eal: Defining dependency "eal" 00:03:15.486 Message: lib/ring: Defining dependency "ring" 00:03:15.486 Message: lib/rcu: Defining dependency "rcu" 00:03:15.486 Message: lib/mempool: Defining dependency "mempool" 00:03:15.486 Message: lib/mbuf: Defining dependency "mbuf" 00:03:15.486 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:15.486 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:15.486 Compiler for C supports arguments -mpclmul: YES 00:03:15.486 Compiler for C supports arguments -maes: YES 00:03:15.486 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:15.486 Compiler for C supports arguments -mavx512bw: YES 00:03:15.486 Compiler for C supports arguments -mavx512dq: YES 00:03:15.486 Compiler for C supports arguments -mavx512vl: YES 00:03:15.486 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:15.486 Compiler for C supports arguments -mavx2: YES 00:03:15.486 Compiler for C supports arguments -mavx: YES 00:03:15.486 Message: lib/net: Defining dependency "net" 00:03:15.486 Message: lib/meter: Defining dependency "meter" 00:03:15.486 Message: lib/ethdev: Defining dependency "ethdev" 00:03:15.486 Message: lib/pci: Defining dependency "pci" 00:03:15.486 Message: lib/cmdline: Defining dependency "cmdline" 00:03:15.486 Message: lib/hash: Defining dependency "hash" 00:03:15.486 Message: lib/timer: Defining dependency "timer" 00:03:15.486 Message: lib/compressdev: Defining dependency "compressdev" 00:03:15.486 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:15.486 Message: lib/dmadev: Defining dependency "dmadev" 00:03:15.486 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:15.486 Message: lib/power: Defining dependency "power" 00:03:15.486 Message: lib/reorder: Defining dependency "reorder" 00:03:15.486 Message: lib/security: Defining dependency "security" 00:03:15.486 Has header "linux/userfaultfd.h" : YES 00:03:15.486 Has header "linux/vduse.h" : YES 00:03:15.486 Message: lib/vhost: Defining dependency "vhost" 00:03:15.486 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:15.486 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:15.486 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:15.486 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:15.486 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:15.486 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:15.486 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:15.486 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:15.486 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:15.486 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:15.486 Program doxygen found: YES (/usr/bin/doxygen) 00:03:15.486 Configuring doxy-api-html.conf using configuration 00:03:15.486 Configuring doxy-api-man.conf using configuration 00:03:15.486 Program mandb found: YES (/usr/bin/mandb) 00:03:15.486 Program sphinx-build found: NO 00:03:15.486 Configuring rte_build_config.h using configuration 00:03:15.486 Message: 00:03:15.486 ================= 00:03:15.486 Applications Enabled 00:03:15.486 ================= 00:03:15.486 00:03:15.486 apps: 00:03:15.486 00:03:15.486 00:03:15.486 Message: 00:03:15.486 ================= 00:03:15.486 Libraries Enabled 00:03:15.486 ================= 00:03:15.486 00:03:15.486 libs: 00:03:15.486 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:15.486 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:15.486 cryptodev, dmadev, power, reorder, security, vhost, 00:03:15.486 00:03:15.486 Message: 00:03:15.486 =============== 00:03:15.486 Drivers Enabled 00:03:15.486 =============== 00:03:15.486 00:03:15.486 common: 00:03:15.486 00:03:15.486 bus: 00:03:15.486 pci, vdev, 00:03:15.486 mempool: 00:03:15.486 ring, 00:03:15.486 dma: 00:03:15.486 00:03:15.486 net: 00:03:15.486 00:03:15.486 crypto: 00:03:15.486 00:03:15.486 compress: 00:03:15.486 00:03:15.486 vdpa: 00:03:15.486 00:03:15.486 00:03:15.486 Message: 00:03:15.486 ================= 00:03:15.486 Content Skipped 00:03:15.486 ================= 00:03:15.486 00:03:15.486 apps: 00:03:15.486 dumpcap: explicitly disabled via build config 00:03:15.486 graph: explicitly disabled via build config 00:03:15.486 pdump: explicitly disabled via build config 00:03:15.486 proc-info: explicitly disabled via build config 00:03:15.486 test-acl: explicitly disabled via build config 00:03:15.486 test-bbdev: explicitly disabled via build config 00:03:15.486 test-cmdline: explicitly disabled via build config 00:03:15.486 test-compress-perf: explicitly disabled via build config 00:03:15.486 test-crypto-perf: explicitly disabled via build config 00:03:15.486 test-dma-perf: explicitly disabled via build config 00:03:15.486 test-eventdev: explicitly disabled via build config 00:03:15.486 test-fib: explicitly disabled via build config 00:03:15.486 test-flow-perf: explicitly disabled via build config 00:03:15.486 test-gpudev: explicitly disabled via build config 00:03:15.486 test-mldev: explicitly disabled via build config 00:03:15.486 test-pipeline: explicitly disabled via build config 00:03:15.486 test-pmd: explicitly disabled via build config 00:03:15.486 test-regex: explicitly disabled via build config 00:03:15.486 test-sad: explicitly disabled via build config 00:03:15.486 test-security-perf: explicitly disabled via build config 00:03:15.486 00:03:15.486 libs: 00:03:15.486 argparse: explicitly disabled via build config 00:03:15.486 metrics: explicitly disabled via build config 00:03:15.486 acl: explicitly disabled via build config 00:03:15.486 bbdev: explicitly disabled via build config 00:03:15.486 
bitratestats: explicitly disabled via build config 00:03:15.486 bpf: explicitly disabled via build config 00:03:15.486 cfgfile: explicitly disabled via build config 00:03:15.486 distributor: explicitly disabled via build config 00:03:15.486 efd: explicitly disabled via build config 00:03:15.486 eventdev: explicitly disabled via build config 00:03:15.486 dispatcher: explicitly disabled via build config 00:03:15.486 gpudev: explicitly disabled via build config 00:03:15.486 gro: explicitly disabled via build config 00:03:15.486 gso: explicitly disabled via build config 00:03:15.486 ip_frag: explicitly disabled via build config 00:03:15.486 jobstats: explicitly disabled via build config 00:03:15.486 latencystats: explicitly disabled via build config 00:03:15.486 lpm: explicitly disabled via build config 00:03:15.486 member: explicitly disabled via build config 00:03:15.486 pcapng: explicitly disabled via build config 00:03:15.486 rawdev: explicitly disabled via build config 00:03:15.486 regexdev: explicitly disabled via build config 00:03:15.486 mldev: explicitly disabled via build config 00:03:15.486 rib: explicitly disabled via build config 00:03:15.486 sched: explicitly disabled via build config 00:03:15.486 stack: explicitly disabled via build config 00:03:15.486 ipsec: explicitly disabled via build config 00:03:15.486 pdcp: explicitly disabled via build config 00:03:15.486 fib: explicitly disabled via build config 00:03:15.486 port: explicitly disabled via build config 00:03:15.486 pdump: explicitly disabled via build config 00:03:15.486 table: explicitly disabled via build config 00:03:15.486 pipeline: explicitly disabled via build config 00:03:15.486 graph: explicitly disabled via build config 00:03:15.486 node: explicitly disabled via build config 00:03:15.486 00:03:15.486 drivers: 00:03:15.486 common/cpt: not in enabled drivers build config 00:03:15.486 common/dpaax: not in enabled drivers build config 00:03:15.486 common/iavf: not in enabled drivers build config 00:03:15.486 common/idpf: not in enabled drivers build config 00:03:15.486 common/ionic: not in enabled drivers build config 00:03:15.486 common/mvep: not in enabled drivers build config 00:03:15.486 common/octeontx: not in enabled drivers build config 00:03:15.486 bus/auxiliary: not in enabled drivers build config 00:03:15.486 bus/cdx: not in enabled drivers build config 00:03:15.486 bus/dpaa: not in enabled drivers build config 00:03:15.487 bus/fslmc: not in enabled drivers build config 00:03:15.487 bus/ifpga: not in enabled drivers build config 00:03:15.487 bus/platform: not in enabled drivers build config 00:03:15.487 bus/uacce: not in enabled drivers build config 00:03:15.487 bus/vmbus: not in enabled drivers build config 00:03:15.487 common/cnxk: not in enabled drivers build config 00:03:15.487 common/mlx5: not in enabled drivers build config 00:03:15.487 common/nfp: not in enabled drivers build config 00:03:15.487 common/nitrox: not in enabled drivers build config 00:03:15.487 common/qat: not in enabled drivers build config 00:03:15.487 common/sfc_efx: not in enabled drivers build config 00:03:15.487 mempool/bucket: not in enabled drivers build config 00:03:15.487 mempool/cnxk: not in enabled drivers build config 00:03:15.487 mempool/dpaa: not in enabled drivers build config 00:03:15.487 mempool/dpaa2: not in enabled drivers build config 00:03:15.487 mempool/octeontx: not in enabled drivers build config 00:03:15.487 mempool/stack: not in enabled drivers build config 00:03:15.487 dma/cnxk: not in enabled drivers build 
config 00:03:15.487 dma/dpaa: not in enabled drivers build config 00:03:15.487 dma/dpaa2: not in enabled drivers build config 00:03:15.487 dma/hisilicon: not in enabled drivers build config 00:03:15.487 dma/idxd: not in enabled drivers build config 00:03:15.487 dma/ioat: not in enabled drivers build config 00:03:15.487 dma/skeleton: not in enabled drivers build config 00:03:15.487 net/af_packet: not in enabled drivers build config 00:03:15.487 net/af_xdp: not in enabled drivers build config 00:03:15.487 net/ark: not in enabled drivers build config 00:03:15.487 net/atlantic: not in enabled drivers build config 00:03:15.487 net/avp: not in enabled drivers build config 00:03:15.487 net/axgbe: not in enabled drivers build config 00:03:15.487 net/bnx2x: not in enabled drivers build config 00:03:15.487 net/bnxt: not in enabled drivers build config 00:03:15.487 net/bonding: not in enabled drivers build config 00:03:15.487 net/cnxk: not in enabled drivers build config 00:03:15.487 net/cpfl: not in enabled drivers build config 00:03:15.487 net/cxgbe: not in enabled drivers build config 00:03:15.487 net/dpaa: not in enabled drivers build config 00:03:15.487 net/dpaa2: not in enabled drivers build config 00:03:15.487 net/e1000: not in enabled drivers build config 00:03:15.487 net/ena: not in enabled drivers build config 00:03:15.487 net/enetc: not in enabled drivers build config 00:03:15.487 net/enetfec: not in enabled drivers build config 00:03:15.487 net/enic: not in enabled drivers build config 00:03:15.487 net/failsafe: not in enabled drivers build config 00:03:15.487 net/fm10k: not in enabled drivers build config 00:03:15.487 net/gve: not in enabled drivers build config 00:03:15.487 net/hinic: not in enabled drivers build config 00:03:15.487 net/hns3: not in enabled drivers build config 00:03:15.487 net/i40e: not in enabled drivers build config 00:03:15.487 net/iavf: not in enabled drivers build config 00:03:15.487 net/ice: not in enabled drivers build config 00:03:15.487 net/idpf: not in enabled drivers build config 00:03:15.487 net/igc: not in enabled drivers build config 00:03:15.487 net/ionic: not in enabled drivers build config 00:03:15.487 net/ipn3ke: not in enabled drivers build config 00:03:15.487 net/ixgbe: not in enabled drivers build config 00:03:15.487 net/mana: not in enabled drivers build config 00:03:15.487 net/memif: not in enabled drivers build config 00:03:15.487 net/mlx4: not in enabled drivers build config 00:03:15.487 net/mlx5: not in enabled drivers build config 00:03:15.487 net/mvneta: not in enabled drivers build config 00:03:15.487 net/mvpp2: not in enabled drivers build config 00:03:15.487 net/netvsc: not in enabled drivers build config 00:03:15.487 net/nfb: not in enabled drivers build config 00:03:15.487 net/nfp: not in enabled drivers build config 00:03:15.487 net/ngbe: not in enabled drivers build config 00:03:15.487 net/null: not in enabled drivers build config 00:03:15.487 net/octeontx: not in enabled drivers build config 00:03:15.487 net/octeon_ep: not in enabled drivers build config 00:03:15.487 net/pcap: not in enabled drivers build config 00:03:15.487 net/pfe: not in enabled drivers build config 00:03:15.487 net/qede: not in enabled drivers build config 00:03:15.487 net/ring: not in enabled drivers build config 00:03:15.487 net/sfc: not in enabled drivers build config 00:03:15.487 net/softnic: not in enabled drivers build config 00:03:15.487 net/tap: not in enabled drivers build config 00:03:15.487 net/thunderx: not in enabled drivers build config 00:03:15.487 
net/txgbe: not in enabled drivers build config 00:03:15.487 net/vdev_netvsc: not in enabled drivers build config 00:03:15.487 net/vhost: not in enabled drivers build config 00:03:15.487 net/virtio: not in enabled drivers build config 00:03:15.487 net/vmxnet3: not in enabled drivers build config 00:03:15.487 raw/*: missing internal dependency, "rawdev" 00:03:15.487 crypto/armv8: not in enabled drivers build config 00:03:15.487 crypto/bcmfs: not in enabled drivers build config 00:03:15.487 crypto/caam_jr: not in enabled drivers build config 00:03:15.487 crypto/ccp: not in enabled drivers build config 00:03:15.487 crypto/cnxk: not in enabled drivers build config 00:03:15.487 crypto/dpaa_sec: not in enabled drivers build config 00:03:15.487 crypto/dpaa2_sec: not in enabled drivers build config 00:03:15.487 crypto/ipsec_mb: not in enabled drivers build config 00:03:15.487 crypto/mlx5: not in enabled drivers build config 00:03:15.487 crypto/mvsam: not in enabled drivers build config 00:03:15.487 crypto/nitrox: not in enabled drivers build config 00:03:15.487 crypto/null: not in enabled drivers build config 00:03:15.487 crypto/octeontx: not in enabled drivers build config 00:03:15.487 crypto/openssl: not in enabled drivers build config 00:03:15.487 crypto/scheduler: not in enabled drivers build config 00:03:15.487 crypto/uadk: not in enabled drivers build config 00:03:15.487 crypto/virtio: not in enabled drivers build config 00:03:15.487 compress/isal: not in enabled drivers build config 00:03:15.487 compress/mlx5: not in enabled drivers build config 00:03:15.487 compress/nitrox: not in enabled drivers build config 00:03:15.487 compress/octeontx: not in enabled drivers build config 00:03:15.487 compress/zlib: not in enabled drivers build config 00:03:15.487 regex/*: missing internal dependency, "regexdev" 00:03:15.487 ml/*: missing internal dependency, "mldev" 00:03:15.487 vdpa/ifc: not in enabled drivers build config 00:03:15.487 vdpa/mlx5: not in enabled drivers build config 00:03:15.487 vdpa/nfp: not in enabled drivers build config 00:03:15.487 vdpa/sfc: not in enabled drivers build config 00:03:15.487 event/*: missing internal dependency, "eventdev" 00:03:15.487 baseband/*: missing internal dependency, "bbdev" 00:03:15.487 gpu/*: missing internal dependency, "gpudev" 00:03:15.487 00:03:15.487 00:03:15.487 Build targets in project: 85 00:03:15.487 00:03:15.487 DPDK 24.03.0 00:03:15.487 00:03:15.487 User defined options 00:03:15.487 buildtype : debug 00:03:15.487 default_library : shared 00:03:15.487 libdir : lib 00:03:15.487 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:15.487 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:15.487 c_link_args : 00:03:15.487 cpu_instruction_set: native 00:03:15.487 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:15.487 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:15.487 enable_docs : false 00:03:15.487 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:15.487 enable_kmods : false 00:03:15.487 max_lcores : 128 00:03:15.487 tests : false 00:03:15.487 00:03:15.487 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:15.487 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:15.487 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:15.487 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:15.487 [3/268] Linking static target lib/librte_kvargs.a 00:03:15.487 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:15.487 [5/268] Linking static target lib/librte_log.a 00:03:15.487 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:15.487 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.487 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:15.487 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:15.487 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:15.487 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:15.487 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:15.487 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:15.487 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:15.487 [15/268] Linking static target lib/librte_telemetry.a 00:03:15.487 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:15.487 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.745 [18/268] Linking target lib/librte_log.so.24.1 00:03:15.745 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:15.745 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:16.003 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:16.003 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:16.261 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:16.518 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:16.518 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:16.518 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:16.776 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.776 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:16.776 [29/268] Linking target lib/librte_telemetry.so.24.1 00:03:16.776 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:16.776 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:17.034 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:17.034 [33/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:17.291 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:17.291 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:17.291 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:17.291 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:17.858 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:17.858 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:17.858 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:18.116 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:18.116 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:18.116 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:18.116 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:18.374 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:18.632 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:18.632 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:18.889 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:18.889 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:19.147 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:19.147 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:19.147 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:19.147 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:19.147 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:19.713 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:19.713 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:19.969 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:19.969 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:20.225 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:20.225 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:20.225 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:20.225 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:20.482 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:20.482 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:21.048 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:21.048 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:21.049 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:21.306 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:21.306 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:21.563 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:21.820 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:21.820 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:21.820 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:22.077 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:22.077 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:22.077 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:22.335 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:22.335 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:22.335 [79/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:22.592 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:22.592 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:22.849 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:23.107 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:23.364 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:23.364 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:23.364 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:23.621 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:23.621 [88/268] Linking static target lib/librte_eal.a 00:03:23.621 [89/268] Linking static target lib/librte_ring.a 00:03:23.879 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:23.879 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:23.879 [92/268] Linking static target lib/librte_rcu.a 00:03:23.879 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:23.879 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:23.879 [95/268] Linking static target lib/librte_mempool.a 00:03:24.136 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:24.136 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:24.403 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:24.404 [99/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.667 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:24.667 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:24.667 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.667 [103/268] Linking static target lib/librte_mbuf.a 00:03:24.924 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:24.924 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:25.489 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:25.489 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:25.747 [108/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.005 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:26.005 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:26.005 [111/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:26.005 [112/268] Linking static target lib/librte_meter.a 00:03:26.005 [113/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.005 [114/268] Linking static target lib/librte_net.a 00:03:26.005 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:26.262 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:26.520 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:26.520 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.520 [119/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.454 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:27.712 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:27.970 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:27.970 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:28.228 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:28.228 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:28.228 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:28.228 [127/268] Linking static target lib/librte_pci.a 00:03:28.485 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:28.485 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:28.485 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:28.485 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:28.743 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:28.743 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:29.001 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:29.001 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:29.001 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:29.001 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.001 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:29.001 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:29.001 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:29.001 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:29.001 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:29.001 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:29.259 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:29.259 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:29.259 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:29.259 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:29.517 [148/268] Linking static target lib/librte_ethdev.a 00:03:29.774 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:29.774 [150/268] Linking static target lib/librte_cmdline.a 00:03:30.031 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:30.288 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:30.288 [153/268] Linking static target lib/librte_timer.a 00:03:30.288 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:30.288 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:30.556 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:30.556 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:30.814 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:30.814 [159/268] Linking static target lib/librte_hash.a 00:03:31.072 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 
00:03:31.329 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:31.329 [162/268] Linking static target lib/librte_compressdev.a 00:03:31.329 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:31.587 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:31.587 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:31.845 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:31.845 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:31.845 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.102 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:32.102 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:32.359 [171/268] Linking static target lib/librte_dmadev.a 00:03:32.359 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:32.617 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.617 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.875 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:32.875 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:32.875 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:32.875 [178/268] Linking static target lib/librte_cryptodev.a 00:03:33.134 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:33.392 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:33.392 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:33.392 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.650 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:33.912 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:33.912 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:33.912 [186/268] Linking static target lib/librte_reorder.a 00:03:34.477 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:34.477 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:34.477 [189/268] Linking static target lib/librte_security.a 00:03:34.477 [190/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:34.735 [191/268] Linking static target lib/librte_power.a 00:03:34.735 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:34.735 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:34.992 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.250 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:35.567 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.840 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:36.098 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:36.098 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.098 [200/268] Generating 
lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.355 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:36.355 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:36.613 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:36.871 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:36.871 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:37.130 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:37.130 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:37.389 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:37.390 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:37.390 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:37.390 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:37.648 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:37.648 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:37.648 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:37.648 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:37.648 [216/268] Linking static target drivers/librte_bus_pci.a 00:03:37.648 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:37.648 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:37.648 [219/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:37.648 [220/268] Linking static target drivers/librte_bus_vdev.a 00:03:37.906 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:37.906 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:38.164 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.164 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:38.164 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:38.164 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:38.165 [227/268] Linking static target drivers/librte_mempool_ring.a 00:03:38.165 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.731 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.731 [230/268] Linking target lib/librte_eal.so.24.1 00:03:38.990 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:38.990 [232/268] Linking target lib/librte_dmadev.so.24.1 00:03:38.990 [233/268] Linking target lib/librte_meter.so.24.1 00:03:38.990 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:38.990 [235/268] Linking target lib/librte_pci.so.24.1 00:03:38.990 [236/268] Linking target lib/librte_timer.so.24.1 00:03:38.990 [237/268] Linking target lib/librte_ring.so.24.1 00:03:38.990 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:38.990 [239/268] Generating symbol file 
lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:38.990 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:38.990 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:38.990 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:39.249 [243/268] Linking target lib/librte_rcu.so.24.1 00:03:39.249 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:39.249 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:39.249 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:39.249 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:39.249 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:39.507 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:39.507 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:39.507 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:03:39.507 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:39.507 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:39.766 [254/268] Linking target lib/librte_net.so.24.1 00:03:39.766 [255/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.766 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:39.766 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:39.766 [258/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:39.766 [259/268] Linking target lib/librte_hash.so.24.1 00:03:39.766 [260/268] Linking target lib/librte_security.so.24.1 00:03:39.766 [261/268] Linking target lib/librte_cmdline.so.24.1 00:03:39.766 [262/268] Linking static target lib/librte_vhost.a 00:03:39.766 [263/268] Linking target lib/librte_ethdev.so.24.1 00:03:40.024 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:40.025 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:40.025 [266/268] Linking target lib/librte_power.so.24.1 00:03:41.398 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.398 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:41.398 INFO: autodetecting backend as ninja 00:03:41.398 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:42.782 CC lib/ut_mock/mock.o 00:03:42.782 CC lib/ut/ut.o 00:03:42.782 CC lib/log/log.o 00:03:42.782 CC lib/log/log_deprecated.o 00:03:42.782 CC lib/log/log_flags.o 00:03:42.782 LIB libspdk_log.a 00:03:42.782 LIB libspdk_ut.a 00:03:42.782 SO libspdk_ut.so.2.0 00:03:42.782 SO libspdk_log.so.7.0 00:03:42.782 LIB libspdk_ut_mock.a 00:03:43.040 SYMLINK libspdk_ut.so 00:03:43.040 SO libspdk_ut_mock.so.6.0 00:03:43.040 SYMLINK libspdk_log.so 00:03:43.040 SYMLINK libspdk_ut_mock.so 00:03:43.040 CXX lib/trace_parser/trace.o 00:03:43.040 CC lib/util/base64.o 00:03:43.040 CC lib/util/bit_array.o 00:03:43.040 CC lib/util/cpuset.o 00:03:43.040 CC lib/ioat/ioat.o 00:03:43.040 CC lib/dma/dma.o 00:03:43.040 CC lib/util/crc16.o 00:03:43.040 CC lib/util/crc32.o 00:03:43.040 CC lib/util/crc32c.o 00:03:43.298 CC lib/vfio_user/host/vfio_user_pci.o 00:03:43.298 CC lib/vfio_user/host/vfio_user.o 00:03:43.298 LIB libspdk_dma.a 00:03:43.298 SO 
libspdk_dma.so.4.0 00:03:43.298 CC lib/util/crc32_ieee.o 00:03:43.556 CC lib/util/crc64.o 00:03:43.556 CC lib/util/dif.o 00:03:43.556 CC lib/util/fd.o 00:03:43.556 SYMLINK libspdk_dma.so 00:03:43.556 CC lib/util/fd_group.o 00:03:43.556 LIB libspdk_ioat.a 00:03:43.556 CC lib/util/file.o 00:03:43.556 CC lib/util/hexlify.o 00:03:43.556 SO libspdk_ioat.so.7.0 00:03:43.556 SYMLINK libspdk_ioat.so 00:03:43.556 CC lib/util/iov.o 00:03:43.556 CC lib/util/math.o 00:03:43.556 LIB libspdk_vfio_user.a 00:03:43.556 SO libspdk_vfio_user.so.5.0 00:03:43.556 CC lib/util/net.o 00:03:43.556 CC lib/util/pipe.o 00:03:43.556 CC lib/util/strerror_tls.o 00:03:43.556 CC lib/util/string.o 00:03:43.815 SYMLINK libspdk_vfio_user.so 00:03:43.815 CC lib/util/uuid.o 00:03:43.815 CC lib/util/xor.o 00:03:43.815 CC lib/util/zipf.o 00:03:44.072 LIB libspdk_util.a 00:03:44.336 LIB libspdk_trace_parser.a 00:03:44.336 SO libspdk_util.so.9.1 00:03:44.336 SO libspdk_trace_parser.so.5.0 00:03:44.336 SYMLINK libspdk_trace_parser.so 00:03:44.336 SYMLINK libspdk_util.so 00:03:44.599 CC lib/conf/conf.o 00:03:44.599 CC lib/idxd/idxd.o 00:03:44.599 CC lib/idxd/idxd_user.o 00:03:44.599 CC lib/rdma_utils/rdma_utils.o 00:03:44.599 CC lib/idxd/idxd_kernel.o 00:03:44.599 CC lib/vmd/vmd.o 00:03:44.599 CC lib/vmd/led.o 00:03:44.599 CC lib/json/json_parse.o 00:03:44.599 CC lib/rdma_provider/common.o 00:03:44.599 CC lib/env_dpdk/env.o 00:03:44.856 CC lib/json/json_util.o 00:03:44.856 CC lib/json/json_write.o 00:03:44.856 LIB libspdk_rdma_utils.a 00:03:44.856 SO libspdk_rdma_utils.so.1.0 00:03:44.856 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:45.113 LIB libspdk_conf.a 00:03:45.113 SYMLINK libspdk_rdma_utils.so 00:03:45.113 CC lib/env_dpdk/memory.o 00:03:45.113 CC lib/env_dpdk/pci.o 00:03:45.113 SO libspdk_conf.so.6.0 00:03:45.113 CC lib/env_dpdk/init.o 00:03:45.113 CC lib/env_dpdk/threads.o 00:03:45.113 LIB libspdk_json.a 00:03:45.113 SYMLINK libspdk_conf.so 00:03:45.113 CC lib/env_dpdk/pci_ioat.o 00:03:45.113 SO libspdk_json.so.6.0 00:03:45.113 SYMLINK libspdk_json.so 00:03:45.371 CC lib/env_dpdk/pci_virtio.o 00:03:45.371 CC lib/env_dpdk/pci_vmd.o 00:03:45.371 LIB libspdk_rdma_provider.a 00:03:45.371 SO libspdk_rdma_provider.so.6.0 00:03:45.371 CC lib/jsonrpc/jsonrpc_server.o 00:03:45.371 CC lib/env_dpdk/pci_idxd.o 00:03:45.371 CC lib/env_dpdk/pci_event.o 00:03:45.371 SYMLINK libspdk_rdma_provider.so 00:03:45.371 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:45.371 LIB libspdk_idxd.a 00:03:45.371 CC lib/jsonrpc/jsonrpc_client.o 00:03:45.629 SO libspdk_idxd.so.12.0 00:03:45.629 CC lib/env_dpdk/sigbus_handler.o 00:03:45.629 CC lib/env_dpdk/pci_dpdk.o 00:03:45.629 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:45.629 SYMLINK libspdk_idxd.so 00:03:45.629 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:45.629 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:45.629 LIB libspdk_vmd.a 00:03:45.629 SO libspdk_vmd.so.6.0 00:03:45.887 SYMLINK libspdk_vmd.so 00:03:45.887 LIB libspdk_jsonrpc.a 00:03:46.144 SO libspdk_jsonrpc.so.6.0 00:03:46.144 SYMLINK libspdk_jsonrpc.so 00:03:46.402 CC lib/rpc/rpc.o 00:03:46.402 LIB libspdk_env_dpdk.a 00:03:46.660 LIB libspdk_rpc.a 00:03:46.660 SO libspdk_env_dpdk.so.15.0 00:03:46.660 SO libspdk_rpc.so.6.0 00:03:46.660 SYMLINK libspdk_rpc.so 00:03:46.660 SYMLINK libspdk_env_dpdk.so 00:03:46.918 CC lib/trace/trace.o 00:03:46.918 CC lib/trace/trace_flags.o 00:03:46.918 CC lib/trace/trace_rpc.o 00:03:46.918 CC lib/keyring/keyring.o 00:03:46.918 CC lib/notify/notify.o 00:03:46.918 CC lib/keyring/keyring_rpc.o 00:03:46.918 CC lib/notify/notify_rpc.o 
00:03:47.176 LIB libspdk_notify.a 00:03:47.176 SO libspdk_notify.so.6.0 00:03:47.176 LIB libspdk_trace.a 00:03:47.176 LIB libspdk_keyring.a 00:03:47.176 SYMLINK libspdk_notify.so 00:03:47.176 SO libspdk_trace.so.10.0 00:03:47.434 SO libspdk_keyring.so.1.0 00:03:47.434 SYMLINK libspdk_keyring.so 00:03:47.434 SYMLINK libspdk_trace.so 00:03:47.691 CC lib/thread/thread.o 00:03:47.691 CC lib/thread/iobuf.o 00:03:47.691 CC lib/sock/sock.o 00:03:47.691 CC lib/sock/sock_rpc.o 00:03:47.949 LIB libspdk_sock.a 00:03:48.207 SO libspdk_sock.so.10.0 00:03:48.207 SYMLINK libspdk_sock.so 00:03:48.464 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:48.464 CC lib/nvme/nvme_ctrlr.o 00:03:48.464 CC lib/nvme/nvme_ns_cmd.o 00:03:48.464 CC lib/nvme/nvme_fabric.o 00:03:48.464 CC lib/nvme/nvme_ns.o 00:03:48.464 CC lib/nvme/nvme_pcie.o 00:03:48.464 CC lib/nvme/nvme_pcie_common.o 00:03:48.464 CC lib/nvme/nvme_qpair.o 00:03:48.464 CC lib/nvme/nvme.o 00:03:49.399 CC lib/nvme/nvme_quirks.o 00:03:49.399 LIB libspdk_thread.a 00:03:49.399 SO libspdk_thread.so.10.1 00:03:49.658 CC lib/nvme/nvme_transport.o 00:03:49.658 CC lib/nvme/nvme_discovery.o 00:03:49.658 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:49.658 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:49.658 SYMLINK libspdk_thread.so 00:03:49.658 CC lib/nvme/nvme_tcp.o 00:03:49.658 CC lib/nvme/nvme_opal.o 00:03:49.658 CC lib/nvme/nvme_io_msg.o 00:03:49.916 CC lib/nvme/nvme_poll_group.o 00:03:49.916 CC lib/nvme/nvme_zns.o 00:03:50.482 CC lib/accel/accel.o 00:03:50.482 CC lib/accel/accel_rpc.o 00:03:50.482 CC lib/accel/accel_sw.o 00:03:50.740 CC lib/nvme/nvme_stubs.o 00:03:50.741 CC lib/nvme/nvme_auth.o 00:03:50.741 CC lib/nvme/nvme_cuse.o 00:03:50.998 CC lib/init/json_config.o 00:03:50.998 CC lib/blob/blobstore.o 00:03:50.998 CC lib/blob/request.o 00:03:50.998 CC lib/virtio/virtio.o 00:03:50.998 CC lib/virtio/virtio_vhost_user.o 00:03:51.256 CC lib/init/subsystem.o 00:03:51.514 CC lib/blob/zeroes.o 00:03:51.514 CC lib/init/subsystem_rpc.o 00:03:51.514 CC lib/blob/blob_bs_dev.o 00:03:51.772 CC lib/virtio/virtio_vfio_user.o 00:03:51.772 CC lib/virtio/virtio_pci.o 00:03:51.772 CC lib/init/rpc.o 00:03:51.772 LIB libspdk_accel.a 00:03:51.772 CC lib/nvme/nvme_rdma.o 00:03:51.772 SO libspdk_accel.so.15.1 00:03:52.030 SYMLINK libspdk_accel.so 00:03:52.030 LIB libspdk_init.a 00:03:52.030 SO libspdk_init.so.5.0 00:03:52.030 SYMLINK libspdk_init.so 00:03:52.030 CC lib/bdev/bdev.o 00:03:52.030 CC lib/bdev/bdev_rpc.o 00:03:52.030 CC lib/bdev/bdev_zone.o 00:03:52.288 LIB libspdk_virtio.a 00:03:52.288 SO libspdk_virtio.so.7.0 00:03:52.288 CC lib/bdev/part.o 00:03:52.288 CC lib/bdev/scsi_nvme.o 00:03:52.288 CC lib/event/app.o 00:03:52.288 CC lib/event/reactor.o 00:03:52.288 SYMLINK libspdk_virtio.so 00:03:52.288 CC lib/event/log_rpc.o 00:03:52.546 CC lib/event/app_rpc.o 00:03:52.546 CC lib/event/scheduler_static.o 00:03:53.110 LIB libspdk_event.a 00:03:53.110 SO libspdk_event.so.14.0 00:03:53.110 SYMLINK libspdk_event.so 00:03:53.675 LIB libspdk_nvme.a 00:03:53.675 SO libspdk_nvme.so.13.1 00:03:54.240 SYMLINK libspdk_nvme.so 00:03:54.805 LIB libspdk_blob.a 00:03:54.805 SO libspdk_blob.so.11.0 00:03:54.805 SYMLINK libspdk_blob.so 00:03:55.064 CC lib/blobfs/blobfs.o 00:03:55.064 CC lib/blobfs/tree.o 00:03:55.064 CC lib/lvol/lvol.o 00:03:55.064 LIB libspdk_bdev.a 00:03:55.320 SO libspdk_bdev.so.16.0 00:03:55.320 SYMLINK libspdk_bdev.so 00:03:55.578 CC lib/nbd/nbd.o 00:03:55.578 CC lib/ublk/ublk.o 00:03:55.578 CC lib/nbd/nbd_rpc.o 00:03:55.578 CC lib/ublk/ublk_rpc.o 00:03:55.578 CC lib/nvmf/ctrlr.o 00:03:55.578 
CC lib/ftl/ftl_core.o 00:03:55.578 CC lib/nvmf/ctrlr_discovery.o 00:03:55.578 CC lib/scsi/dev.o 00:03:55.841 CC lib/nvmf/ctrlr_bdev.o 00:03:55.841 CC lib/nvmf/subsystem.o 00:03:55.841 LIB libspdk_blobfs.a 00:03:56.110 SO libspdk_blobfs.so.10.0 00:03:56.110 CC lib/scsi/lun.o 00:03:56.110 SYMLINK libspdk_blobfs.so 00:03:56.110 CC lib/scsi/port.o 00:03:56.374 LIB libspdk_lvol.a 00:03:56.374 CC lib/ftl/ftl_init.o 00:03:56.374 LIB libspdk_nbd.a 00:03:56.374 SO libspdk_lvol.so.10.0 00:03:56.374 SO libspdk_nbd.so.7.0 00:03:56.374 CC lib/ftl/ftl_layout.o 00:03:56.374 SYMLINK libspdk_lvol.so 00:03:56.374 CC lib/ftl/ftl_debug.o 00:03:56.374 CC lib/nvmf/nvmf.o 00:03:56.374 SYMLINK libspdk_nbd.so 00:03:56.631 CC lib/scsi/scsi.o 00:03:56.631 CC lib/scsi/scsi_bdev.o 00:03:56.631 LIB libspdk_ublk.a 00:03:56.631 CC lib/nvmf/nvmf_rpc.o 00:03:56.631 CC lib/nvmf/transport.o 00:03:56.631 SO libspdk_ublk.so.3.0 00:03:56.631 CC lib/ftl/ftl_io.o 00:03:56.889 CC lib/ftl/ftl_sb.o 00:03:56.889 SYMLINK libspdk_ublk.so 00:03:56.889 CC lib/ftl/ftl_l2p.o 00:03:56.889 CC lib/ftl/ftl_l2p_flat.o 00:03:57.147 CC lib/nvmf/tcp.o 00:03:57.147 CC lib/ftl/ftl_nv_cache.o 00:03:57.147 CC lib/ftl/ftl_band.o 00:03:57.147 CC lib/ftl/ftl_band_ops.o 00:03:57.147 CC lib/scsi/scsi_pr.o 00:03:57.405 CC lib/nvmf/stubs.o 00:03:57.663 CC lib/scsi/scsi_rpc.o 00:03:57.663 CC lib/scsi/task.o 00:03:57.663 CC lib/nvmf/mdns_server.o 00:03:57.663 CC lib/ftl/ftl_writer.o 00:03:57.921 CC lib/nvmf/rdma.o 00:03:57.921 CC lib/ftl/ftl_rq.o 00:03:57.921 LIB libspdk_scsi.a 00:03:57.921 CC lib/ftl/ftl_reloc.o 00:03:57.921 CC lib/ftl/ftl_l2p_cache.o 00:03:57.921 SO libspdk_scsi.so.9.0 00:03:58.178 CC lib/ftl/ftl_p2l.o 00:03:58.178 CC lib/nvmf/auth.o 00:03:58.178 SYMLINK libspdk_scsi.so 00:03:58.178 CC lib/ftl/mngt/ftl_mngt.o 00:03:58.435 CC lib/iscsi/conn.o 00:03:58.435 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:58.435 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:58.693 CC lib/iscsi/init_grp.o 00:03:58.693 CC lib/vhost/vhost.o 00:03:58.693 CC lib/vhost/vhost_rpc.o 00:03:58.693 CC lib/vhost/vhost_scsi.o 00:03:58.950 CC lib/vhost/vhost_blk.o 00:03:58.950 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:58.950 CC lib/vhost/rte_vhost_user.o 00:03:58.950 CC lib/iscsi/iscsi.o 00:03:58.950 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:59.208 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:59.208 CC lib/iscsi/md5.o 00:03:59.466 CC lib/iscsi/param.o 00:03:59.466 CC lib/iscsi/portal_grp.o 00:03:59.466 CC lib/iscsi/tgt_node.o 00:03:59.724 CC lib/iscsi/iscsi_subsystem.o 00:03:59.724 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:59.724 CC lib/iscsi/iscsi_rpc.o 00:03:59.724 CC lib/iscsi/task.o 00:03:59.724 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:59.981 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:59.981 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:59.981 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:59.981 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:59.981 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:00.239 CC lib/ftl/utils/ftl_conf.o 00:04:00.239 CC lib/ftl/utils/ftl_md.o 00:04:00.239 CC lib/ftl/utils/ftl_mempool.o 00:04:00.239 CC lib/ftl/utils/ftl_bitmap.o 00:04:00.239 CC lib/ftl/utils/ftl_property.o 00:04:00.239 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:00.498 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:00.498 LIB libspdk_vhost.a 00:04:00.498 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:00.498 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:00.498 SO libspdk_vhost.so.8.0 00:04:00.498 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:00.755 SYMLINK libspdk_vhost.so 00:04:00.755 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:00.755 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:00.755 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:00.755 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:00.755 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:00.755 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:00.755 CC lib/ftl/base/ftl_base_dev.o 00:04:00.755 CC lib/ftl/base/ftl_base_bdev.o 00:04:01.013 CC lib/ftl/ftl_trace.o 00:04:01.270 LIB libspdk_ftl.a 00:04:01.270 LIB libspdk_iscsi.a 00:04:01.270 LIB libspdk_nvmf.a 00:04:01.270 SO libspdk_iscsi.so.8.0 00:04:01.270 SO libspdk_nvmf.so.18.1 00:04:01.270 SO libspdk_ftl.so.9.0 00:04:01.528 SYMLINK libspdk_iscsi.so 00:04:01.528 SYMLINK libspdk_nvmf.so 00:04:01.785 SYMLINK libspdk_ftl.so 00:04:02.044 CC module/env_dpdk/env_dpdk_rpc.o 00:04:02.302 CC module/accel/iaa/accel_iaa.o 00:04:02.302 CC module/accel/error/accel_error.o 00:04:02.302 CC module/accel/ioat/accel_ioat.o 00:04:02.302 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:02.302 CC module/sock/posix/posix.o 00:04:02.302 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:02.302 CC module/blob/bdev/blob_bdev.o 00:04:02.302 CC module/accel/dsa/accel_dsa.o 00:04:02.302 CC module/keyring/file/keyring.o 00:04:02.302 LIB libspdk_env_dpdk_rpc.a 00:04:02.302 SO libspdk_env_dpdk_rpc.so.6.0 00:04:02.302 SYMLINK libspdk_env_dpdk_rpc.so 00:04:02.302 LIB libspdk_scheduler_dpdk_governor.a 00:04:02.559 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:02.559 CC module/keyring/file/keyring_rpc.o 00:04:02.559 CC module/accel/error/accel_error_rpc.o 00:04:02.559 CC module/accel/ioat/accel_ioat_rpc.o 00:04:02.559 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:02.559 LIB libspdk_blob_bdev.a 00:04:02.559 CC module/accel/iaa/accel_iaa_rpc.o 00:04:02.559 CC module/accel/dsa/accel_dsa_rpc.o 00:04:02.559 SO libspdk_blob_bdev.so.11.0 00:04:02.560 LIB libspdk_scheduler_dynamic.a 00:04:02.560 CC module/keyring/linux/keyring.o 00:04:02.560 SO libspdk_scheduler_dynamic.so.4.0 00:04:02.560 CC module/keyring/linux/keyring_rpc.o 00:04:02.560 SYMLINK libspdk_blob_bdev.so 00:04:02.560 LIB libspdk_keyring_file.a 00:04:02.560 LIB libspdk_accel_ioat.a 00:04:02.818 SYMLINK libspdk_scheduler_dynamic.so 00:04:02.818 SO libspdk_keyring_file.so.1.0 00:04:02.818 SO libspdk_accel_ioat.so.6.0 00:04:02.818 LIB libspdk_accel_iaa.a 00:04:02.818 LIB libspdk_keyring_linux.a 00:04:02.818 LIB libspdk_accel_error.a 00:04:02.818 SYMLINK libspdk_keyring_file.so 00:04:02.818 SO libspdk_accel_iaa.so.3.0 00:04:02.818 LIB libspdk_accel_dsa.a 00:04:02.818 SYMLINK libspdk_accel_ioat.so 00:04:02.818 SO libspdk_keyring_linux.so.1.0 00:04:02.818 SO libspdk_accel_error.so.2.0 00:04:02.818 SO libspdk_accel_dsa.so.5.0 00:04:02.818 SYMLINK libspdk_accel_iaa.so 00:04:02.818 CC module/scheduler/gscheduler/gscheduler.o 00:04:02.818 SYMLINK libspdk_keyring_linux.so 00:04:02.818 SYMLINK libspdk_accel_dsa.so 00:04:03.077 SYMLINK libspdk_accel_error.so 00:04:03.077 CC module/bdev/delay/vbdev_delay.o 00:04:03.077 CC module/bdev/gpt/gpt.o 00:04:03.077 CC module/bdev/error/vbdev_error.o 00:04:03.077 CC module/blobfs/bdev/blobfs_bdev.o 00:04:03.077 CC module/bdev/lvol/vbdev_lvol.o 00:04:03.077 CC module/bdev/malloc/bdev_malloc.o 00:04:03.077 CC module/bdev/null/bdev_null.o 00:04:03.077 CC module/bdev/nvme/bdev_nvme.o 00:04:03.077 LIB libspdk_scheduler_gscheduler.a 00:04:03.335 SO libspdk_scheduler_gscheduler.so.4.0 00:04:03.335 CC module/bdev/gpt/vbdev_gpt.o 00:04:03.335 SYMLINK libspdk_scheduler_gscheduler.so 00:04:03.335 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:03.335 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:03.335 CC 
module/bdev/error/vbdev_error_rpc.o 00:04:03.335 CC module/bdev/null/bdev_null_rpc.o 00:04:03.593 LIB libspdk_sock_posix.a 00:04:03.593 SO libspdk_sock_posix.so.6.0 00:04:03.593 LIB libspdk_bdev_error.a 00:04:03.593 LIB libspdk_bdev_gpt.a 00:04:03.593 SO libspdk_bdev_error.so.6.0 00:04:03.593 LIB libspdk_blobfs_bdev.a 00:04:03.593 SO libspdk_bdev_gpt.so.6.0 00:04:03.593 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:03.593 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:03.593 SO libspdk_blobfs_bdev.so.6.0 00:04:03.593 LIB libspdk_bdev_null.a 00:04:03.593 SO libspdk_bdev_null.so.6.0 00:04:03.593 SYMLINK libspdk_sock_posix.so 00:04:03.593 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:03.593 SYMLINK libspdk_bdev_gpt.so 00:04:03.593 SYMLINK libspdk_bdev_error.so 00:04:03.850 CC module/bdev/nvme/nvme_rpc.o 00:04:03.850 SYMLINK libspdk_blobfs_bdev.so 00:04:03.850 SYMLINK libspdk_bdev_null.so 00:04:03.850 CC module/bdev/passthru/vbdev_passthru.o 00:04:03.850 LIB libspdk_bdev_delay.a 00:04:03.850 CC module/bdev/raid/bdev_raid.o 00:04:03.850 LIB libspdk_bdev_malloc.a 00:04:04.108 CC module/bdev/split/vbdev_split.o 00:04:04.108 CC module/bdev/split/vbdev_split_rpc.o 00:04:04.108 SO libspdk_bdev_delay.so.6.0 00:04:04.108 SO libspdk_bdev_malloc.so.6.0 00:04:04.108 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:04.108 LIB libspdk_bdev_lvol.a 00:04:04.108 SO libspdk_bdev_lvol.so.6.0 00:04:04.108 SYMLINK libspdk_bdev_malloc.so 00:04:04.108 SYMLINK libspdk_bdev_delay.so 00:04:04.108 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:04.364 CC module/bdev/raid/bdev_raid_rpc.o 00:04:04.364 SYMLINK libspdk_bdev_lvol.so 00:04:04.364 LIB libspdk_bdev_passthru.a 00:04:04.364 CC module/bdev/aio/bdev_aio.o 00:04:04.364 LIB libspdk_bdev_split.a 00:04:04.364 SO libspdk_bdev_passthru.so.6.0 00:04:04.364 CC module/bdev/ftl/bdev_ftl.o 00:04:04.364 SO libspdk_bdev_split.so.6.0 00:04:04.364 CC module/bdev/iscsi/bdev_iscsi.o 00:04:04.621 SYMLINK libspdk_bdev_passthru.so 00:04:04.621 SYMLINK libspdk_bdev_split.so 00:04:04.621 CC module/bdev/nvme/bdev_mdns_client.o 00:04:04.621 CC module/bdev/nvme/vbdev_opal.o 00:04:04.621 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:04.621 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:04.621 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:04.877 LIB libspdk_bdev_zone_block.a 00:04:04.877 SO libspdk_bdev_zone_block.so.6.0 00:04:04.877 CC module/bdev/aio/bdev_aio_rpc.o 00:04:04.877 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:04.877 CC module/bdev/raid/bdev_raid_sb.o 00:04:04.877 SYMLINK libspdk_bdev_zone_block.so 00:04:04.877 CC module/bdev/raid/raid0.o 00:04:04.877 CC module/bdev/raid/raid1.o 00:04:04.877 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:05.135 LIB libspdk_bdev_aio.a 00:04:05.135 SO libspdk_bdev_aio.so.6.0 00:04:05.135 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:05.135 SYMLINK libspdk_bdev_aio.so 00:04:05.135 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:05.135 CC module/bdev/raid/concat.o 00:04:05.392 LIB libspdk_bdev_ftl.a 00:04:05.392 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:05.392 SO libspdk_bdev_ftl.so.6.0 00:04:05.392 LIB libspdk_bdev_iscsi.a 00:04:05.392 SYMLINK libspdk_bdev_ftl.so 00:04:05.392 LIB libspdk_bdev_virtio.a 00:04:05.392 SO libspdk_bdev_iscsi.so.6.0 00:04:05.392 SO libspdk_bdev_virtio.so.6.0 00:04:05.392 LIB libspdk_bdev_raid.a 00:04:05.649 SYMLINK libspdk_bdev_iscsi.so 00:04:05.649 SO libspdk_bdev_raid.so.6.0 00:04:05.649 SYMLINK libspdk_bdev_virtio.so 00:04:05.649 SYMLINK libspdk_bdev_raid.so 00:04:06.214 LIB libspdk_bdev_nvme.a 00:04:06.214 SO 
libspdk_bdev_nvme.so.7.0 00:04:06.472 SYMLINK libspdk_bdev_nvme.so 00:04:06.731 CC module/event/subsystems/sock/sock.o 00:04:06.731 CC module/event/subsystems/scheduler/scheduler.o 00:04:06.731 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:06.731 CC module/event/subsystems/keyring/keyring.o 00:04:06.731 CC module/event/subsystems/iobuf/iobuf.o 00:04:06.731 CC module/event/subsystems/vmd/vmd.o 00:04:06.731 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:06.731 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:06.990 LIB libspdk_event_sock.a 00:04:06.990 LIB libspdk_event_vhost_blk.a 00:04:06.990 LIB libspdk_event_scheduler.a 00:04:06.990 LIB libspdk_event_keyring.a 00:04:06.990 SO libspdk_event_sock.so.5.0 00:04:06.990 SO libspdk_event_vhost_blk.so.3.0 00:04:06.990 SO libspdk_event_scheduler.so.4.0 00:04:06.990 SO libspdk_event_keyring.so.1.0 00:04:06.990 LIB libspdk_event_vmd.a 00:04:06.990 LIB libspdk_event_iobuf.a 00:04:06.990 SYMLINK libspdk_event_sock.so 00:04:07.249 SYMLINK libspdk_event_vhost_blk.so 00:04:07.249 SYMLINK libspdk_event_keyring.so 00:04:07.249 SO libspdk_event_vmd.so.6.0 00:04:07.249 SYMLINK libspdk_event_scheduler.so 00:04:07.249 SO libspdk_event_iobuf.so.3.0 00:04:07.249 SYMLINK libspdk_event_vmd.so 00:04:07.249 SYMLINK libspdk_event_iobuf.so 00:04:07.508 CC module/event/subsystems/accel/accel.o 00:04:07.508 LIB libspdk_event_accel.a 00:04:07.766 SO libspdk_event_accel.so.6.0 00:04:07.766 SYMLINK libspdk_event_accel.so 00:04:08.025 CC module/event/subsystems/bdev/bdev.o 00:04:08.283 LIB libspdk_event_bdev.a 00:04:08.283 SO libspdk_event_bdev.so.6.0 00:04:08.283 SYMLINK libspdk_event_bdev.so 00:04:08.542 CC module/event/subsystems/nbd/nbd.o 00:04:08.542 CC module/event/subsystems/ublk/ublk.o 00:04:08.542 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:08.542 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:08.542 CC module/event/subsystems/scsi/scsi.o 00:04:08.800 LIB libspdk_event_nbd.a 00:04:08.800 LIB libspdk_event_ublk.a 00:04:08.800 SO libspdk_event_nbd.so.6.0 00:04:08.800 LIB libspdk_event_scsi.a 00:04:08.800 SO libspdk_event_ublk.so.3.0 00:04:08.800 SO libspdk_event_scsi.so.6.0 00:04:08.800 LIB libspdk_event_nvmf.a 00:04:08.800 SYMLINK libspdk_event_nbd.so 00:04:08.800 SYMLINK libspdk_event_ublk.so 00:04:08.800 SO libspdk_event_nvmf.so.6.0 00:04:08.800 SYMLINK libspdk_event_scsi.so 00:04:09.066 SYMLINK libspdk_event_nvmf.so 00:04:09.066 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:09.066 CC module/event/subsystems/iscsi/iscsi.o 00:04:09.336 LIB libspdk_event_vhost_scsi.a 00:04:09.336 LIB libspdk_event_iscsi.a 00:04:09.336 SO libspdk_event_vhost_scsi.so.3.0 00:04:09.336 SO libspdk_event_iscsi.so.6.0 00:04:09.593 SYMLINK libspdk_event_iscsi.so 00:04:09.593 SYMLINK libspdk_event_vhost_scsi.so 00:04:09.593 SO libspdk.so.6.0 00:04:09.593 SYMLINK libspdk.so 00:04:09.851 CXX app/trace/trace.o 00:04:09.851 CC app/trace_record/trace_record.o 00:04:09.851 TEST_HEADER include/spdk/accel.h 00:04:09.851 CC test/rpc_client/rpc_client_test.o 00:04:09.851 TEST_HEADER include/spdk/accel_module.h 00:04:09.851 TEST_HEADER include/spdk/assert.h 00:04:09.851 TEST_HEADER include/spdk/barrier.h 00:04:09.851 TEST_HEADER include/spdk/base64.h 00:04:09.851 TEST_HEADER include/spdk/bdev.h 00:04:09.851 TEST_HEADER include/spdk/bdev_module.h 00:04:09.851 TEST_HEADER include/spdk/bdev_zone.h 00:04:09.851 TEST_HEADER include/spdk/bit_array.h 00:04:09.851 TEST_HEADER include/spdk/bit_pool.h 00:04:09.851 TEST_HEADER include/spdk/blob_bdev.h 00:04:09.851 TEST_HEADER 
include/spdk/blobfs_bdev.h 00:04:09.851 TEST_HEADER include/spdk/blobfs.h 00:04:09.851 TEST_HEADER include/spdk/blob.h 00:04:09.851 TEST_HEADER include/spdk/conf.h 00:04:09.851 TEST_HEADER include/spdk/config.h 00:04:09.851 TEST_HEADER include/spdk/cpuset.h 00:04:09.851 TEST_HEADER include/spdk/crc16.h 00:04:09.851 TEST_HEADER include/spdk/crc32.h 00:04:09.851 TEST_HEADER include/spdk/crc64.h 00:04:09.851 TEST_HEADER include/spdk/dif.h 00:04:09.851 TEST_HEADER include/spdk/dma.h 00:04:09.851 TEST_HEADER include/spdk/endian.h 00:04:09.851 TEST_HEADER include/spdk/env_dpdk.h 00:04:09.851 TEST_HEADER include/spdk/env.h 00:04:09.851 TEST_HEADER include/spdk/event.h 00:04:09.851 CC app/nvmf_tgt/nvmf_main.o 00:04:09.851 TEST_HEADER include/spdk/fd_group.h 00:04:09.851 TEST_HEADER include/spdk/fd.h 00:04:09.851 TEST_HEADER include/spdk/file.h 00:04:09.851 TEST_HEADER include/spdk/ftl.h 00:04:09.851 TEST_HEADER include/spdk/gpt_spec.h 00:04:09.851 TEST_HEADER include/spdk/hexlify.h 00:04:09.851 TEST_HEADER include/spdk/histogram_data.h 00:04:10.110 TEST_HEADER include/spdk/idxd.h 00:04:10.110 TEST_HEADER include/spdk/idxd_spec.h 00:04:10.110 TEST_HEADER include/spdk/init.h 00:04:10.110 TEST_HEADER include/spdk/ioat.h 00:04:10.110 TEST_HEADER include/spdk/ioat_spec.h 00:04:10.110 CC test/thread/poller_perf/poller_perf.o 00:04:10.110 TEST_HEADER include/spdk/iscsi_spec.h 00:04:10.110 TEST_HEADER include/spdk/json.h 00:04:10.110 TEST_HEADER include/spdk/jsonrpc.h 00:04:10.110 CC examples/util/zipf/zipf.o 00:04:10.110 TEST_HEADER include/spdk/keyring.h 00:04:10.110 TEST_HEADER include/spdk/keyring_module.h 00:04:10.110 TEST_HEADER include/spdk/likely.h 00:04:10.110 TEST_HEADER include/spdk/log.h 00:04:10.110 TEST_HEADER include/spdk/lvol.h 00:04:10.110 TEST_HEADER include/spdk/memory.h 00:04:10.110 TEST_HEADER include/spdk/mmio.h 00:04:10.110 TEST_HEADER include/spdk/nbd.h 00:04:10.110 TEST_HEADER include/spdk/net.h 00:04:10.110 TEST_HEADER include/spdk/notify.h 00:04:10.110 TEST_HEADER include/spdk/nvme.h 00:04:10.110 TEST_HEADER include/spdk/nvme_intel.h 00:04:10.110 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:10.110 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:10.110 TEST_HEADER include/spdk/nvme_spec.h 00:04:10.110 TEST_HEADER include/spdk/nvme_zns.h 00:04:10.110 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:10.110 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:10.110 CC test/app/bdev_svc/bdev_svc.o 00:04:10.110 TEST_HEADER include/spdk/nvmf.h 00:04:10.110 TEST_HEADER include/spdk/nvmf_spec.h 00:04:10.110 TEST_HEADER include/spdk/nvmf_transport.h 00:04:10.110 TEST_HEADER include/spdk/opal.h 00:04:10.110 TEST_HEADER include/spdk/opal_spec.h 00:04:10.110 TEST_HEADER include/spdk/pci_ids.h 00:04:10.110 TEST_HEADER include/spdk/pipe.h 00:04:10.110 TEST_HEADER include/spdk/queue.h 00:04:10.110 TEST_HEADER include/spdk/reduce.h 00:04:10.110 CC test/env/mem_callbacks/mem_callbacks.o 00:04:10.110 TEST_HEADER include/spdk/rpc.h 00:04:10.110 TEST_HEADER include/spdk/scheduler.h 00:04:10.110 TEST_HEADER include/spdk/scsi.h 00:04:10.110 TEST_HEADER include/spdk/scsi_spec.h 00:04:10.110 TEST_HEADER include/spdk/sock.h 00:04:10.110 TEST_HEADER include/spdk/stdinc.h 00:04:10.110 CC test/dma/test_dma/test_dma.o 00:04:10.110 TEST_HEADER include/spdk/string.h 00:04:10.110 TEST_HEADER include/spdk/thread.h 00:04:10.110 TEST_HEADER include/spdk/trace.h 00:04:10.110 TEST_HEADER include/spdk/trace_parser.h 00:04:10.110 TEST_HEADER include/spdk/tree.h 00:04:10.110 TEST_HEADER include/spdk/ublk.h 00:04:10.110 TEST_HEADER 
include/spdk/util.h 00:04:10.110 TEST_HEADER include/spdk/uuid.h 00:04:10.110 TEST_HEADER include/spdk/version.h 00:04:10.110 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:10.110 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:10.110 TEST_HEADER include/spdk/vhost.h 00:04:10.110 TEST_HEADER include/spdk/vmd.h 00:04:10.110 TEST_HEADER include/spdk/xor.h 00:04:10.110 TEST_HEADER include/spdk/zipf.h 00:04:10.110 CXX test/cpp_headers/accel.o 00:04:10.110 LINK rpc_client_test 00:04:10.110 LINK nvmf_tgt 00:04:10.110 LINK spdk_trace_record 00:04:10.110 LINK zipf 00:04:10.368 LINK poller_perf 00:04:10.368 LINK spdk_trace 00:04:10.368 LINK bdev_svc 00:04:10.368 CXX test/cpp_headers/accel_module.o 00:04:10.626 CC examples/ioat/perf/perf.o 00:04:10.626 LINK test_dma 00:04:10.626 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:10.626 CXX test/cpp_headers/assert.o 00:04:10.626 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:10.884 CC app/iscsi_tgt/iscsi_tgt.o 00:04:10.884 LINK ioat_perf 00:04:10.884 CC examples/thread/thread/thread_ex.o 00:04:10.884 LINK interrupt_tgt 00:04:10.884 CXX test/cpp_headers/barrier.o 00:04:10.884 CC examples/sock/hello_world/hello_sock.o 00:04:11.141 LINK mem_callbacks 00:04:11.141 CC examples/ioat/verify/verify.o 00:04:11.141 LINK thread 00:04:11.141 LINK iscsi_tgt 00:04:11.141 CXX test/cpp_headers/base64.o 00:04:11.141 CC test/event/event_perf/event_perf.o 00:04:11.141 CC test/event/reactor/reactor.o 00:04:11.141 LINK nvme_fuzz 00:04:11.399 LINK hello_sock 00:04:11.399 CXX test/cpp_headers/bdev.o 00:04:11.399 LINK event_perf 00:04:11.399 LINK verify 00:04:11.399 CC test/env/vtophys/vtophys.o 00:04:11.399 CXX test/cpp_headers/bdev_module.o 00:04:11.399 LINK reactor 00:04:11.399 CXX test/cpp_headers/bdev_zone.o 00:04:11.399 CXX test/cpp_headers/bit_array.o 00:04:11.399 CXX test/cpp_headers/bit_pool.o 00:04:11.657 LINK vtophys 00:04:11.657 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:11.657 CC app/spdk_tgt/spdk_tgt.o 00:04:11.657 CXX test/cpp_headers/blob_bdev.o 00:04:11.657 CC test/event/reactor_perf/reactor_perf.o 00:04:11.915 CC test/nvme/aer/aer.o 00:04:11.915 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:11.915 CC test/accel/dif/dif.o 00:04:11.915 CC test/blobfs/mkfs/mkfs.o 00:04:11.915 LINK env_dpdk_post_init 00:04:11.915 CC test/lvol/esnap/esnap.o 00:04:12.172 LINK reactor_perf 00:04:12.172 LINK spdk_tgt 00:04:12.172 LINK aer 00:04:12.172 CXX test/cpp_headers/blobfs_bdev.o 00:04:12.172 LINK mkfs 00:04:12.429 CC test/env/memory/memory_ut.o 00:04:12.687 CXX test/cpp_headers/blobfs.o 00:04:12.687 CC test/event/app_repeat/app_repeat.o 00:04:12.687 CC test/nvme/reset/reset.o 00:04:12.687 LINK dif 00:04:12.687 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:12.946 CC app/spdk_lspci/spdk_lspci.o 00:04:12.946 LINK app_repeat 00:04:12.946 CXX test/cpp_headers/blob.o 00:04:12.946 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:13.205 LINK spdk_lspci 00:04:13.205 LINK reset 00:04:13.205 CXX test/cpp_headers/conf.o 00:04:13.205 CC test/event/scheduler/scheduler.o 00:04:13.205 CC app/spdk_nvme_perf/perf.o 00:04:13.463 LINK vhost_fuzz 00:04:13.463 CXX test/cpp_headers/config.o 00:04:13.463 CXX test/cpp_headers/cpuset.o 00:04:13.463 CC test/nvme/sgl/sgl.o 00:04:13.463 LINK scheduler 00:04:13.722 CC test/bdev/bdevio/bdevio.o 00:04:13.722 CC test/env/pci/pci_ut.o 00:04:13.722 CXX test/cpp_headers/crc16.o 00:04:13.980 LINK sgl 00:04:14.237 CC app/spdk_nvme_identify/identify.o 00:04:14.237 CXX test/cpp_headers/crc32.o 00:04:14.237 LINK memory_ut 00:04:14.237 LINK pci_ut 
00:04:14.495 LINK bdevio 00:04:14.753 CXX test/cpp_headers/crc64.o 00:04:14.753 CC test/nvme/e2edp/nvme_dp.o 00:04:14.753 LINK spdk_nvme_perf 00:04:14.753 CXX test/cpp_headers/dif.o 00:04:14.753 LINK iscsi_fuzz 00:04:15.011 CC app/spdk_nvme_discover/discovery_aer.o 00:04:15.011 CC app/spdk_top/spdk_top.o 00:04:15.269 CXX test/cpp_headers/dma.o 00:04:15.269 CC test/app/histogram_perf/histogram_perf.o 00:04:15.269 LINK nvme_dp 00:04:15.269 CC app/vhost/vhost.o 00:04:15.269 LINK spdk_nvme_discover 00:04:15.527 CXX test/cpp_headers/endian.o 00:04:15.527 CC test/app/jsoncat/jsoncat.o 00:04:15.527 LINK histogram_perf 00:04:15.784 CC test/nvme/overhead/overhead.o 00:04:15.784 LINK vhost 00:04:15.784 LINK spdk_nvme_identify 00:04:15.784 CXX test/cpp_headers/env_dpdk.o 00:04:15.784 LINK jsoncat 00:04:16.042 CC app/spdk_dd/spdk_dd.o 00:04:16.042 CXX test/cpp_headers/env.o 00:04:16.300 CC app/fio/nvme/fio_plugin.o 00:04:16.300 CC test/app/stub/stub.o 00:04:16.557 LINK overhead 00:04:16.557 CC app/fio/bdev/fio_plugin.o 00:04:16.557 CC examples/vmd/lsvmd/lsvmd.o 00:04:16.557 CXX test/cpp_headers/event.o 00:04:16.815 LINK lsvmd 00:04:16.815 LINK stub 00:04:16.815 CXX test/cpp_headers/fd_group.o 00:04:17.073 LINK spdk_top 00:04:17.073 LINK spdk_dd 00:04:17.331 CC test/nvme/err_injection/err_injection.o 00:04:17.331 CC examples/vmd/led/led.o 00:04:17.331 CXX test/cpp_headers/fd.o 00:04:17.589 LINK spdk_nvme 00:04:17.589 CC test/nvme/startup/startup.o 00:04:17.589 LINK led 00:04:17.589 CXX test/cpp_headers/file.o 00:04:17.589 LINK spdk_bdev 00:04:17.846 LINK err_injection 00:04:17.846 CC test/nvme/reserve/reserve.o 00:04:17.846 CC examples/idxd/perf/perf.o 00:04:17.846 CC test/nvme/simple_copy/simple_copy.o 00:04:17.846 LINK startup 00:04:18.104 CXX test/cpp_headers/ftl.o 00:04:18.104 CXX test/cpp_headers/gpt_spec.o 00:04:18.104 CC test/nvme/connect_stress/connect_stress.o 00:04:18.104 LINK reserve 00:04:18.362 LINK simple_copy 00:04:18.362 CC test/nvme/boot_partition/boot_partition.o 00:04:18.362 CC examples/accel/perf/accel_perf.o 00:04:18.362 CXX test/cpp_headers/hexlify.o 00:04:18.362 LINK idxd_perf 00:04:18.620 LINK connect_stress 00:04:18.620 LINK boot_partition 00:04:18.620 CC test/nvme/compliance/nvme_compliance.o 00:04:18.878 CC examples/nvme/hello_world/hello_world.o 00:04:18.878 CC examples/blob/hello_world/hello_blob.o 00:04:18.878 CXX test/cpp_headers/histogram_data.o 00:04:18.878 CC examples/nvme/reconnect/reconnect.o 00:04:18.878 CC test/nvme/fused_ordering/fused_ordering.o 00:04:19.135 LINK hello_world 00:04:19.135 LINK nvme_compliance 00:04:19.135 CXX test/cpp_headers/idxd.o 00:04:19.135 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:19.135 LINK hello_blob 00:04:19.135 LINK accel_perf 00:04:19.393 LINK fused_ordering 00:04:19.393 LINK reconnect 00:04:19.393 CC examples/blob/cli/blobcli.o 00:04:19.393 CXX test/cpp_headers/idxd_spec.o 00:04:19.651 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:19.651 CC test/nvme/fdp/fdp.o 00:04:19.651 CXX test/cpp_headers/init.o 00:04:19.651 CXX test/cpp_headers/ioat.o 00:04:19.651 CC examples/nvme/arbitration/arbitration.o 00:04:19.651 CXX test/cpp_headers/ioat_spec.o 00:04:19.909 LINK doorbell_aers 00:04:19.909 CXX test/cpp_headers/iscsi_spec.o 00:04:19.909 CC test/nvme/cuse/cuse.o 00:04:19.909 LINK nvme_manage 00:04:20.166 LINK fdp 00:04:20.166 CXX test/cpp_headers/json.o 00:04:20.166 LINK arbitration 00:04:20.425 LINK esnap 00:04:20.425 LINK blobcli 00:04:20.425 CC examples/nvme/hotplug/hotplug.o 00:04:20.425 CXX test/cpp_headers/jsonrpc.o 
00:04:20.425 CXX test/cpp_headers/keyring.o 00:04:20.425 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:20.425 CC examples/bdev/hello_world/hello_bdev.o 00:04:20.683 CC examples/nvme/abort/abort.o 00:04:20.683 LINK cmb_copy 00:04:20.941 CXX test/cpp_headers/keyring_module.o 00:04:20.942 CXX test/cpp_headers/likely.o 00:04:20.942 LINK hotplug 00:04:20.942 CC examples/bdev/bdevperf/bdevperf.o 00:04:20.942 CXX test/cpp_headers/log.o 00:04:20.942 LINK hello_bdev 00:04:20.942 CXX test/cpp_headers/lvol.o 00:04:20.942 CXX test/cpp_headers/memory.o 00:04:21.201 CXX test/cpp_headers/mmio.o 00:04:21.201 CXX test/cpp_headers/nbd.o 00:04:21.201 CXX test/cpp_headers/net.o 00:04:21.201 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:21.201 CXX test/cpp_headers/notify.o 00:04:21.201 CXX test/cpp_headers/nvme.o 00:04:21.459 CXX test/cpp_headers/nvme_intel.o 00:04:21.459 CXX test/cpp_headers/nvme_ocssd.o 00:04:21.459 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:21.459 LINK abort 00:04:21.459 CXX test/cpp_headers/nvme_spec.o 00:04:21.459 LINK pmr_persistence 00:04:21.459 CXX test/cpp_headers/nvme_zns.o 00:04:21.717 CXX test/cpp_headers/nvmf_cmd.o 00:04:21.717 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:21.717 CXX test/cpp_headers/nvmf.o 00:04:21.717 CXX test/cpp_headers/nvmf_spec.o 00:04:21.717 CXX test/cpp_headers/nvmf_transport.o 00:04:21.717 CXX test/cpp_headers/opal.o 00:04:21.717 CXX test/cpp_headers/opal_spec.o 00:04:21.717 CXX test/cpp_headers/pci_ids.o 00:04:21.974 LINK bdevperf 00:04:21.974 CXX test/cpp_headers/pipe.o 00:04:21.974 CXX test/cpp_headers/queue.o 00:04:21.974 CXX test/cpp_headers/reduce.o 00:04:21.974 CXX test/cpp_headers/rpc.o 00:04:21.974 CXX test/cpp_headers/scheduler.o 00:04:21.974 CXX test/cpp_headers/scsi.o 00:04:21.974 CXX test/cpp_headers/scsi_spec.o 00:04:21.974 LINK cuse 00:04:21.974 CXX test/cpp_headers/sock.o 00:04:21.974 CXX test/cpp_headers/stdinc.o 00:04:21.974 CXX test/cpp_headers/string.o 00:04:21.974 CXX test/cpp_headers/thread.o 00:04:22.232 CXX test/cpp_headers/trace.o 00:04:22.232 CXX test/cpp_headers/trace_parser.o 00:04:22.232 CXX test/cpp_headers/tree.o 00:04:22.232 CXX test/cpp_headers/ublk.o 00:04:22.232 CXX test/cpp_headers/util.o 00:04:22.232 CXX test/cpp_headers/uuid.o 00:04:22.232 CXX test/cpp_headers/version.o 00:04:22.232 CXX test/cpp_headers/vfio_user_pci.o 00:04:22.232 CXX test/cpp_headers/vfio_user_spec.o 00:04:22.232 CXX test/cpp_headers/vhost.o 00:04:22.232 CXX test/cpp_headers/vmd.o 00:04:22.490 CC examples/nvmf/nvmf/nvmf.o 00:04:22.490 CXX test/cpp_headers/xor.o 00:04:22.490 CXX test/cpp_headers/zipf.o 00:04:22.747 LINK nvmf 00:04:23.004 00:04:23.004 real 1m26.497s 00:04:23.004 user 9m53.822s 00:04:23.004 sys 1m59.501s 00:04:23.004 14:44:01 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:23.004 14:44:01 make -- common/autotest_common.sh@10 -- $ set +x 00:04:23.004 ************************************ 00:04:23.004 END TEST make 00:04:23.004 ************************************ 00:04:23.004 14:44:01 -- common/autotest_common.sh@1142 -- $ return 0 00:04:23.004 14:44:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:23.004 14:44:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:23.004 14:44:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:23.004 14:44:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.005 14:44:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:23.005 14:44:01 -- pm/common@44 -- $ pid=5189 00:04:23.005 
14:44:01 -- pm/common@50 -- $ kill -TERM 5189 00:04:23.005 14:44:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.005 14:44:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:23.005 14:44:01 -- pm/common@44 -- $ pid=5191 00:04:23.005 14:44:01 -- pm/common@50 -- $ kill -TERM 5191 00:04:23.005 14:44:01 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:23.005 14:44:01 -- nvmf/common.sh@7 -- # uname -s 00:04:23.005 14:44:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.005 14:44:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.005 14:44:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.005 14:44:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.005 14:44:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:23.005 14:44:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.005 14:44:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.005 14:44:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.005 14:44:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.005 14:44:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:23.262 14:44:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:04:23.262 14:44:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:04:23.262 14:44:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.262 14:44:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.262 14:44:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:23.262 14:44:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:23.262 14:44:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:23.262 14:44:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.262 14:44:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.262 14:44:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.262 14:44:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.262 14:44:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.262 14:44:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.262 14:44:01 -- paths/export.sh@5 -- # export PATH 00:04:23.262 14:44:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.262 14:44:01 -- nvmf/common.sh@47 -- # : 0 00:04:23.262 
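The nvmf/common.sh trace above derives the test host identity from nvme-cli: gen-hostnqn returns an NQN of the form nqn.2014-08.org.nvmexpress:uuid:&lt;uuid&gt;, the uuid suffix is reused as the host ID, and both are packed into the NVME_HOST argument array for later nvme connect calls. A rough sketch of that derivation, assuming nvme-cli is installed; the parsing shown here is an approximation, not the script's exact code:

    #!/usr/bin/env bash
    # Derive a host NQN / host ID pair the way the traced common.sh does.
    # Requires nvme-cli; variable names follow the log, the extraction is a sketch.
    NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}          # keep only the uuid suffix
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # Later 'nvme connect' invocations can splice the array in, e.g.:
    #   nvme connect "${NVME_HOST[@]}" -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"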
14:44:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:23.262 14:44:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:23.262 14:44:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:23.262 14:44:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.262 14:44:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.262 14:44:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:23.262 14:44:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:23.262 14:44:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:23.262 14:44:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:23.262 14:44:01 -- spdk/autotest.sh@32 -- # uname -s 00:04:23.262 14:44:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:23.262 14:44:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:23.262 14:44:01 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:23.262 14:44:01 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:23.262 14:44:01 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:23.262 14:44:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:23.262 14:44:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:23.262 14:44:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:23.262 14:44:01 -- spdk/autotest.sh@48 -- # udevadm_pid=54713 00:04:23.262 14:44:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:23.262 14:44:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:23.262 14:44:01 -- pm/common@17 -- # local monitor 00:04:23.262 14:44:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.262 14:44:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.262 14:44:01 -- pm/common@21 -- # date +%s 00:04:23.262 14:44:01 -- pm/common@25 -- # sleep 1 00:04:23.263 14:44:01 -- pm/common@21 -- # date +%s 00:04:23.263 14:44:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720795441 00:04:23.263 14:44:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720795441 00:04:23.263 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720795441_collect-vmstat.pm.log 00:04:23.263 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720795441_collect-cpu-load.pm.log 00:04:24.196 14:44:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:24.196 14:44:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:24.196 14:44:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.196 14:44:02 -- common/autotest_common.sh@10 -- # set +x 00:04:24.196 14:44:02 -- spdk/autotest.sh@59 -- # create_test_list 00:04:24.196 14:44:02 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:24.196 14:44:02 -- common/autotest_common.sh@10 -- # set +x 00:04:24.196 14:44:02 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:24.196 14:44:02 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:24.196 14:44:02 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:24.196 14:44:02 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 
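Before the coverage baseline is captured, autotest.sh above saves the systemd-coredump core_pattern, points core dumps at scripts/core-collector.sh, and creates the coredumps output directory. The xtrace output hides the redirection targets of the two echo commands, so the sketch below assumes the pipe handler is written to /proc/sys/kernel/core_pattern, which is the standard kernel mechanism; paths are illustrative:

    #!/usr/bin/env bash
    # Redirect kernel core dumps to a collector script for the duration of a test run.
    # Writing /proc/sys/kernel/core_pattern requires root; paths are stand-ins.
    set -euo pipefail

    coredump_dir=/tmp/output/coredumps                   # assumed stand-in for ../output/coredumps
    collector=/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh

    old_core_pattern=$(</proc/sys/kernel/core_pattern)   # save so it can be restored later
    mkdir -p "$coredump_dir"
    echo "|$collector %P %s %t" > /proc/sys/kernel/core_pattern

    # ... run the tests here ...

    echo "$old_core_pattern" > /proc/sys/kernel/core_pattern   # restore on the way out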
00:04:24.196 14:44:02 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:24.196 14:44:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:24.196 14:44:02 -- common/autotest_common.sh@1455 -- # uname 00:04:24.196 14:44:02 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:24.196 14:44:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:24.196 14:44:02 -- common/autotest_common.sh@1475 -- # uname 00:04:24.196 14:44:02 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:24.196 14:44:02 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:24.196 14:44:02 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:24.196 14:44:02 -- spdk/autotest.sh@72 -- # hash lcov 00:04:24.196 14:44:02 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:24.196 14:44:02 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:24.196 --rc lcov_branch_coverage=1 00:04:24.196 --rc lcov_function_coverage=1 00:04:24.196 --rc genhtml_branch_coverage=1 00:04:24.196 --rc genhtml_function_coverage=1 00:04:24.196 --rc genhtml_legend=1 00:04:24.196 --rc geninfo_all_blocks=1 00:04:24.196 ' 00:04:24.196 14:44:02 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:24.196 --rc lcov_branch_coverage=1 00:04:24.196 --rc lcov_function_coverage=1 00:04:24.196 --rc genhtml_branch_coverage=1 00:04:24.196 --rc genhtml_function_coverage=1 00:04:24.196 --rc genhtml_legend=1 00:04:24.196 --rc geninfo_all_blocks=1 00:04:24.196 ' 00:04:24.196 14:44:02 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:24.196 --rc lcov_branch_coverage=1 00:04:24.196 --rc lcov_function_coverage=1 00:04:24.196 --rc genhtml_branch_coverage=1 00:04:24.196 --rc genhtml_function_coverage=1 00:04:24.196 --rc genhtml_legend=1 00:04:24.196 --rc geninfo_all_blocks=1 00:04:24.196 --no-external' 00:04:24.196 14:44:02 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:24.196 --rc lcov_branch_coverage=1 00:04:24.196 --rc lcov_function_coverage=1 00:04:24.196 --rc genhtml_branch_coverage=1 00:04:24.196 --rc genhtml_function_coverage=1 00:04:24.196 --rc genhtml_legend=1 00:04:24.196 --rc geninfo_all_blocks=1 00:04:24.196 --no-external' 00:04:24.196 14:44:02 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:24.454 lcov: LCOV version 1.14 00:04:24.454 14:44:02 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:42.529 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:42.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:57.528 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 
00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:57.528 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:57.528 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:57.529 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:57.529 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:57.529 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:59.435 14:44:37 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:59.435 14:44:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:59.435 14:44:37 -- common/autotest_common.sh@10 -- # set +x 00:04:59.435 14:44:37 -- spdk/autotest.sh@91 -- # rm -f 00:04:59.435 14:44:37 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:00.003 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.003 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:00.003 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:00.003 14:44:38 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:00.003 14:44:38 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:00.003 14:44:38 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:00.003 14:44:38 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:00.003 14:44:38 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:00.003 14:44:38 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:00.003 14:44:38 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:00.003 14:44:38 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.003 14:44:38 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:00.003 14:44:38 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:00.003 14:44:38 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:00.003 14:44:38 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:00.003 14:44:38 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:00.003 14:44:38 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:00.003 14:44:38 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:00.003 14:44:38 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:05:00.003 14:44:38 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:05:00.003 14:44:38 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:00.003 14:44:38 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:00.003 14:44:38 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:00.003 14:44:38 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:05:00.003 14:44:38 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:05:00.003 14:44:38 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:00.003 14:44:38 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:00.003 14:44:38 -- spdk/autotest.sh@98 -- # 
(( 0 > 0 )) 00:05:00.003 14:44:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.003 14:44:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:00.003 14:44:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:00.003 14:44:38 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:00.003 14:44:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:00.261 No valid GPT data, bailing 00:05:00.261 14:44:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:00.261 14:44:38 -- scripts/common.sh@391 -- # pt= 00:05:00.261 14:44:38 -- scripts/common.sh@392 -- # return 1 00:05:00.261 14:44:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:00.261 1+0 records in 00:05:00.261 1+0 records out 00:05:00.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465072 s, 225 MB/s 00:05:00.261 14:44:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.261 14:44:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:00.261 14:44:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:00.261 14:44:38 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:00.261 14:44:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:00.261 No valid GPT data, bailing 00:05:00.261 14:44:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:00.261 14:44:38 -- scripts/common.sh@391 -- # pt= 00:05:00.261 14:44:38 -- scripts/common.sh@392 -- # return 1 00:05:00.261 14:44:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:00.261 1+0 records in 00:05:00.261 1+0 records out 00:05:00.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.004437 s, 236 MB/s 00:05:00.261 14:44:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.261 14:44:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:00.261 14:44:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:00.261 14:44:38 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:00.261 14:44:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:00.261 No valid GPT data, bailing 00:05:00.261 14:44:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:00.261 14:44:38 -- scripts/common.sh@391 -- # pt= 00:05:00.261 14:44:38 -- scripts/common.sh@392 -- # return 1 00:05:00.261 14:44:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:00.261 1+0 records in 00:05:00.261 1+0 records out 00:05:00.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00428155 s, 245 MB/s 00:05:00.261 14:44:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.261 14:44:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:00.261 14:44:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:00.261 14:44:38 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:00.261 14:44:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:00.520 No valid GPT data, bailing 00:05:00.520 14:44:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:00.520 14:44:38 -- scripts/common.sh@391 -- # pt= 00:05:00.520 14:44:38 -- scripts/common.sh@392 -- # return 1 00:05:00.520 14:44:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:00.520 1+0 records in 00:05:00.520 1+0 records out 00:05:00.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.00421237 s, 249 MB/s 00:05:00.520 14:44:38 -- spdk/autotest.sh@118 -- # sync 00:05:00.520 14:44:39 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:00.520 14:44:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:00.520 14:44:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:02.421 14:44:40 -- spdk/autotest.sh@124 -- # uname -s 00:05:02.421 14:44:40 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:02.421 14:44:40 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:02.421 14:44:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.421 14:44:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.421 14:44:40 -- common/autotest_common.sh@10 -- # set +x 00:05:02.421 ************************************ 00:05:02.421 START TEST setup.sh 00:05:02.421 ************************************ 00:05:02.421 14:44:40 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:02.421 * Looking for test storage... 00:05:02.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:02.421 14:44:40 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:02.421 14:44:40 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:02.421 14:44:40 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:02.421 14:44:40 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.421 14:44:40 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.421 14:44:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:02.421 ************************************ 00:05:02.421 START TEST acl 00:05:02.421 ************************************ 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:02.421 * Looking for test storage... 
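The pre-cleanup traced above walks every /dev/nvme*n* namespace, skips zoned devices (the queue/zoned checks), asks scripts/spdk-gpt.py and blkid whether a partition table is present, and when blkid reports no PTTYPE ("No valid GPT data, bailing") zeroes the first 1 MiB with dd. A standalone approximation of that wipe loop; it is destructive, and the GPT check is reduced to blkid only, omitting the spdk-gpt.py step:

    #!/usr/bin/env bash
    # Zero the first MiB of every non-zoned NVMe namespace that carries no
    # partition table, approximating the autotest pre-cleanup loop above.
    shopt -s nullglob extglob

    for dev in /dev/nvme*n!(*p*); do                 # namespaces, not partitions
        name=${dev#/dev/}
        zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
        [[ $zoned != none ]] && continue             # leave zoned namespaces alone

        pt=$(blkid -s PTTYPE -o value "$dev" || true)   # empty when no partition table
        [[ -z $pt ]] || continue                     # a partition table means the disk is in use

        dd if=/dev/zero of="$dev" bs=1M count=1      # scrub stale metadata
    done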
00:05:02.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:02.421 14:44:40 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:02.421 14:44:40 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:02.421 14:44:40 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:02.421 14:44:40 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:02.421 14:44:40 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:02.421 14:44:40 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:02.421 14:44:40 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:02.421 14:44:40 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.421 14:44:40 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:02.988 14:44:41 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:02.988 14:44:41 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:02.988 14:44:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:02.988 14:44:41 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:02.988 14:44:41 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.988 14:44:41 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:03.580 14:44:42 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:03.580 14:44:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:03.580 14:44:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.580 Hugepages 00:05:03.580 node hugesize free / total 00:05:03.580 14:44:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:03.580 14:44:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:03.580 14:44:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.580 00:05:03.580 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:03.580 14:44:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:03.580 14:44:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:03.580 14:44:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:03.838 14:44:42 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:03.838 14:44:42 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.838 14:44:42 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.838 14:44:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:03.838 ************************************ 00:05:03.838 START TEST denied 00:05:03.838 ************************************ 00:05:03.838 14:44:42 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:03.838 14:44:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:03.838 14:44:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:03.838 14:44:42 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:03.838 14:44:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.838 14:44:42 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:04.772 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:04.772 14:44:43 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:04.772 14:44:43 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:05:04.772 14:44:43 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:04.772 14:44:43 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:04.772 14:44:43 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:04.772 14:44:43 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:04.772 14:44:43 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:04.772 14:44:43 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:04.772 14:44:43 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:04.772 14:44:43 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:05.337 00:05:05.337 real 0m1.381s 00:05:05.337 user 0m0.565s 00:05:05.337 sys 0m0.766s 00:05:05.337 ************************************ 00:05:05.337 END TEST denied 00:05:05.337 ************************************ 00:05:05.337 14:44:43 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.337 14:44:43 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:05.337 14:44:43 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:05.337 14:44:43 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:05.337 14:44:43 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.337 14:44:43 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.337 14:44:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:05.337 ************************************ 00:05:05.337 START TEST allowed 00:05:05.337 ************************************ 00:05:05.337 14:44:43 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:05.337 14:44:43 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:05.337 14:44:43 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:05.337 14:44:43 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:05.337 14:44:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.337 14:44:43 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:06.270 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.270 14:44:44 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:06.270 14:44:44 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:06.270 14:44:44 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:06.270 14:44:44 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:06.270 14:44:44 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:06.270 14:44:44 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:06.270 14:44:44 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:06.270 14:44:44 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:06.270 14:44:44 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:06.270 14:44:44 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:06.835 00:05:06.835 real 0m1.510s 00:05:06.835 user 0m0.645s 00:05:06.835 sys 0m0.833s 00:05:06.835 14:44:45 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:05:06.835 14:44:45 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:06.835 ************************************ 00:05:06.835 END TEST allowed 00:05:06.835 ************************************ 00:05:06.835 14:44:45 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:06.835 00:05:06.835 real 0m4.581s 00:05:06.835 user 0m1.995s 00:05:06.835 sys 0m2.511s 00:05:06.835 14:44:45 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.835 14:44:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:06.835 ************************************ 00:05:06.835 END TEST acl 00:05:06.835 ************************************ 00:05:06.835 14:44:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:06.835 14:44:45 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:06.835 14:44:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.835 14:44:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.835 14:44:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:06.835 ************************************ 00:05:06.835 START TEST hugepages 00:05:06.835 ************************************ 00:05:06.835 14:44:45 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:07.093 * Looking for test storage... 00:05:07.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5886620 kB' 'MemAvailable: 7398476 kB' 'Buffers: 2436 kB' 'Cached: 1723280 kB' 'SwapCached: 0 kB' 'Active: 476848 kB' 'Inactive: 1353028 kB' 'Active(anon): 114648 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 105764 kB' 'Mapped: 48724 kB' 'Shmem: 10488 kB' 'KReclaimable: 67136 kB' 'Slab: 147508 kB' 'SReclaimable: 67136 kB' 'SUnreclaim: 80372 kB' 'KernelStack: 6364 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 333408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.093 14:44:45 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.093 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.093 14:44:45 [... the same setup/common.sh@31 IFS=': ' / read -r var val _ / setup/common.sh@32 [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue trace repeats for the remaining /proc/meminfo fields: Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted ...] 00:05:07.094 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.094 14:44:45
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.094 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.094 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.094 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.094 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.094 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:07.095 14:44:45 
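The loop traced above is the Hugepagesize lookup: setup/common.sh walks /proc/meminfo with IFS=': ' and read -r var val _ until the Hugepagesize field matches, echoes its value (2048 kB on this runner), and setup/hugepages.sh then records the default page size, the per-size sysfs counter and the global /proc/sys/vm/nr_hugepages knob before clear_hp zeroes any per-node pools. A minimal stand-alone sketch of that pattern follows; it is an illustration only, not the SPDK helpers themselves. The meminfo_value helper name and the assumption that each per-node pool is reset through its standard nr_hugepages file are the editor's own.

#!/usr/bin/env bash
# Sketch of the traced pattern (hypothetical helper, not setup/common.sh itself).

meminfo_value() {                                  # e.g. meminfo_value Hugepagesize -> 2048
    local key=$1 var val _
    while IFS=': ' read -r var val _; do           # same IFS=': ' / read -r var val _ as the trace
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

default_hugepages=$(meminfo_value Hugepagesize)    # 2048 (kB) in this run
default_huge_nr=/sys/kernel/mm/hugepages/hugepages-${default_hugepages}kB/nr_hugepages
global_huge_nr=/proc/sys/vm/nr_hugepages

# clear_hp equivalent: zero every pre-existing per-node hugepage pool (requires root)
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"                # assumes the kernel's standard nr_hugepages counter file
    done
done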
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:07.095 14:44:45 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:07.095 14:44:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.095 14:44:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.095 14:44:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 ************************************ 00:05:07.095 START TEST default_setup 00:05:07.095 ************************************ 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.095 14:44:45 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:07.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:07.661 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:07.661 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7983324 kB' 'MemAvailable: 9494996 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493204 kB' 'Inactive: 1353040 kB' 'Active(anon): 131004 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122444 kB' 'Mapped: 48788 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146984 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80240 kB' 'KernelStack: 6304 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.924 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.925 14:44:46 [... the same setup/common.sh@31 IFS=': ' / read -r var val _ / setup/common.sh@32 [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue trace repeats for: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit and Committed_AS ...] 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.926 14:44:46
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7984360 kB' 'MemAvailable: 9496032 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493468 kB' 'Inactive: 1353040 kB' 'Active(anon): 131268 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122440 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146972 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80228 kB' 'KernelStack: 6272 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.926 14:44:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.926 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.926 14:44:46 [... the same setup/common.sh@31 IFS=': ' / read -r var val _ / setup/common.sh@32 [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue trace repeats for: Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted ...] 00:05:07.927 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:05:07.927 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.927 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.927 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.927 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.927 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.927 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.927 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.927 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.927 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.927 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.927 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.927 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7984632 kB' 'MemAvailable: 9496304 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493400 kB' 'Inactive: 1353040 kB' 'Active(anon): 131200 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122340 kB' 'Mapped: 
48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146964 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80220 kB' 'KernelStack: 6256 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.928 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 
14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:07.929 nr_hugepages=1024 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:07.929 resv_hugepages=0 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:07.929 surplus_hugepages=0 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:07.929 anon_hugepages=0 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.929 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7984384 kB' 'MemAvailable: 9496056 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493432 kB' 'Inactive: 1353040 kB' 'Active(anon): 131232 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122372 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146964 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80220 kB' 'KernelStack: 6272 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 
14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.930 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7984384 kB' 'MemUsed: 4257592 kB' 'SwapCached: 0 kB' 'Active: 493520 kB' 'Inactive: 1353048 kB' 'Active(anon): 131320 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1725712 kB' 'Mapped: 48728 kB' 'AnonPages: 122452 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66744 kB' 'Slab: 146964 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.931 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
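The long run of "continue" entries above is the xtrace of a single field lookup over /proc/meminfo: setup/common.sh splits every line on IFS=': ' into a field name and a value, skips each field that is not the one requested (HugePages_Surp here), and echoes the value of the matching field (0) before returning. A minimal self-contained sketch of that lookup pattern, for illustration only (an approximation of what the trace shows, not the verbatim SPDK setup/common.sh, which additionally supports per-node lookups via /sys/devices/system/node/nodeN/meminfo):

#!/usr/bin/env bash
# get_meminfo FIELD - echo the value of FIELD from /proc/meminfo, or return 1.
# Mirrors the loop traced above: split each line on ': ', `continue` past
# non-matching fields, echo the value of the first match.
get_meminfo() {
	local get=$1 var val _

	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done </proc/meminfo

	return 1
}

# Example lookups matching the ones traced in this log:
surp=$(get_meminfo HugePages_Surp)   # surplus hugepages ("0" in the dumps above)
anon=$(get_meminfo AnonHugePages)    # anonymous THP usage, in kB
echo "HugePages_Surp=$surp AnonHugePages=$anon kB"

The per_node_1G_alloc test that starts below requests 1048576 kB on node 0; with the 2048 kB Hugepagesize reported in these meminfo dumps, that works out to the nr_hugepages=512 / NRHUGE=512 values seen in the trace (1048576 / 2048 = 512).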
00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:07.932 node0=1024 expecting 1024 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:07.932 00:05:07.932 real 0m0.934s 00:05:07.932 user 0m0.437s 00:05:07.932 sys 0m0.458s 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.932 14:44:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:07.932 ************************************ 00:05:07.932 END TEST default_setup 00:05:07.932 ************************************ 00:05:07.932 14:44:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:07.932 14:44:46 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:07.932 14:44:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.932 14:44:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.932 14:44:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:07.932 ************************************ 00:05:07.932 START TEST per_node_1G_alloc 00:05:07.932 ************************************ 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.932 14:44:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.932 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:08.551 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.551 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:08.551 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9033428 kB' 'MemAvailable: 10545108 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493864 kB' 'Inactive: 1353048 kB' 'Active(anon): 131664 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122752 kB' 'Mapped: 48908 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146952 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80208 kB' 'KernelStack: 6260 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.551 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9033428 kB' 'MemAvailable: 10545108 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493196 kB' 'Inactive: 1353048 kB' 'Active(anon): 130996 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122116 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146960 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80216 kB' 'KernelStack: 6272 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.552 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9033428 kB' 'MemAvailable: 10545108 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493656 kB' 'Inactive: 1353048 kB' 'Active(anon): 131456 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122352 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146960 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80216 kB' 'KernelStack: 6256 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.553 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
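The trace running through this part of the log is setup/common.sh's get_meminfo walking a meminfo file one "key: value" pair at a time (IFS=': '; read -r var val _), hitting "continue" for every key that is not the one requested, and echoing the matching value when it finds it. Below is a minimal stand-alone sketch of the same lookup; get_meminfo_sketch is a hypothetical name used only for illustration, not the SPDK helper, and it assumes bash with extglob available for stripping the "Node N " prefix carried by the per-node sysfs files.

#!/usr/bin/env bash
shopt -s extglob

# Print the value of one meminfo key, optionally for a single NUMA node.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val rest
    # Per-node queries read the sysfs copy when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }            # sysfs lines carry a "Node N " prefix
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$get" ]]; then          # e.g. HugePages_Surp, HugePages_Rsvd
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0                                     # key not present in this file
}

# Example usage: surp=$(get_meminfo_sketch HugePages_Surp); resv=$(get_meminfo_sketch HugePages_Rsvd 0)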
00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 
14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:08.554 nr_hugepages=512 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:08.554 resv_hugepages=0 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:08.554 surplus_hugepages=0 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:08.554 anon_hugepages=0 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:08.554 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:08.555 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.555 14:44:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9033428 kB' 'MemAvailable: 10545108 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493652 kB' 'Inactive: 1353048 kB' 'Active(anon): 131452 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122572 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 
kB' 'Slab: 146964 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80220 kB' 'KernelStack: 6256 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
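Once the per-key lookups have returned (surp=0 and resv=0 above), hugepages.sh echoes the expected figures (nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and re-reads HugePages_Total to confirm the kernel really holds the requested pages; the scan continuing through this stretch of the trace is that readback. A short equivalent of the consistency check, assuming a plain /proc/meminfo read rather than the SPDK helper, could look like:

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 512 in this run
nr_hugepages=512; surp=0; resv=0
(( total == nr_hugepages + surp + resv )) && echo OK || echo "hugepage accounting mismatch" >&2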
00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 
14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.555 14:44:47 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9033428 kB' 'MemUsed: 3208548 kB' 'SwapCached: 0 kB' 'Active: 493248 kB' 'Inactive: 1353048 kB' 'Active(anon): 131048 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1725712 kB' 'Mapped: 48728 kB' 'AnonPages: 122388 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 66744 kB' 'Slab: 146956 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:08.556 node0=512 expecting 512 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:08.556 00:05:08.556 real 0m0.515s 00:05:08.556 user 0m0.244s 00:05:08.556 sys 0m0.284s 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.556 14:44:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:08.556 ************************************ 00:05:08.556 END TEST per_node_1G_alloc 00:05:08.556 ************************************ 00:05:08.557 14:44:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:08.557 14:44:47 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:08.557 14:44:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.557 14:44:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.557 14:44:47 
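[Editor's note] For anyone reading the wall of xtrace above: every repetition of IFS=': ' / read -r var val _ / continue is one field of /proc/meminfo being compared against the key that get_meminfo (setup/common.sh) was asked for, until the matching field prints its value and the function returns; "node0=512 expecting 512" is the per-node check that closes per_node_1G_alloc. Below is a minimal, self-contained sketch of that scan, reconstructed from the trace only. The helper name get_meminfo_sketch and the for/here-string loop construct are editorial assumptions; the overall flow (per-node file check, mapfile, "Node N" prefix strip, field scan, fallback echo 0) is what the log itself shows.

    #!/usr/bin/env bash
    # Sketch reconstructed from the xtrace above -- NOT the upstream setup/common.sh.
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes

    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val line
        local mem_f=/proc/meminfo mem
        # Per-NUMA-node meminfo files prefix every line with "Node N ".
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # The loop the trace repeats once per field: split on ': ',
            # skip until the requested key matches, then print its value.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done
        echo 0   # default when the key is absent (assumed fallback)
    }

    # Example: the surplus hugepage count queried at hugepages.sh@99 in the trace.
    get_meminfo_sketch HugePages_Surp

Run against the snapshots printed in the log, this would emit 0 for HugePages_Surp and 1024 for HugePages_Total.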
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:08.557 ************************************ 00:05:08.557 START TEST even_2G_alloc 00:05:08.557 ************************************ 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.557 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:08.814 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.814 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:08.814 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc 
-- setup/hugepages.sh@92 -- # local surp 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7992260 kB' 'MemAvailable: 9503940 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493624 kB' 'Inactive: 1353048 kB' 'Active(anon): 131424 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122792 kB' 'Mapped: 48772 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146992 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80248 kB' 'KernelStack: 6276 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.814 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.815 14:44:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:08.815 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.077 14:44:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.077 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7992008 kB' 'MemAvailable: 9503688 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493456 kB' 'Inactive: 
1353048 kB' 'Active(anon): 131256 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122400 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 147004 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80260 kB' 'KernelStack: 6272 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.078 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.079 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.080 14:44:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7992008 kB' 'MemAvailable: 9503688 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493204 kB' 'Inactive: 1353048 kB' 'Active(anon): 131004 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122388 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146996 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80252 kB' 'KernelStack: 6272 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.080 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.081 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:09.082 nr_hugepages=1024 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:09.082 resv_hugepages=0 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:09.082 surplus_hugepages=0 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:09.082 anon_hugepages=0 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7992008 kB' 'MemAvailable: 9503688 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493252 kB' 'Inactive: 1353048 kB' 'Active(anon): 131052 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122424 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146996 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80252 kB' 'KernelStack: 6256 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.082 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
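The long runs of IFS=': ' / read -r var val _ / continue entries in this trace are setup/common.sh's get_meminfo helper walking /proc/meminfo (or a per-node meminfo file) one "key: value" line at a time until it reaches the requested key, echoing that value and returning. A minimal sketch of that lookup pattern, reconstructed from the calls visible at common.sh@16-@33 above; the surrounding control flow (the while/printf plumbing, the return 1 fallback) is assumed, not the verbatim SPDK source:

    # Hedged sketch of the meminfo lookup pattern traced above; names such as
    # get, node, mem_f, mem, var, val appear in the trace, the loop plumbing is assumed.
    shopt -s extglob                        # needed for the +([0-9]) prefix strip below
    get_meminfo() {
        local get=$1 node=${2:-}            # e.g. get_meminfo HugePages_Total 0
        local var val _
        local mem_f=/proc/meminfo mem
        # Per-node queries (as at hugepages.sh@117) read the node-specific file instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # node files prefix every line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # skip keys until the requested one
            echo "$val"                         # e.g. 1024 for HugePages_Total
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

In this pass the caller in hugepages.sh has already collected surp=0 and resv=0, the HugePages_Total lookup just below returns 1024, so the sanity check (( 1024 == nr_hugepages + surp + resv )) passes, and the same lookup is then repeated against /sys/devices/system/node/node0/meminfo, ending in 'node0=1024 expecting 1024', i.e. all 1024 pages landed on the single node.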
00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.083 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.084 14:44:47 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7991756 kB' 'MemUsed: 4250220 kB' 'SwapCached: 0 kB' 'Active: 493380 kB' 'Inactive: 1353048 kB' 'Active(anon): 131180 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1725712 kB' 'Mapped: 48988 kB' 'AnonPages: 122588 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66744 kB' 'Slab: 146984 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.084 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:09.085 node0=1024 expecting 1024 00:05:09.085 ************************************ 00:05:09.085 END TEST even_2G_alloc 00:05:09.085 ************************************ 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:09.085 00:05:09.085 real 0m0.508s 00:05:09.085 user 0m0.238s 00:05:09.085 sys 0m0.277s 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.085 14:44:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:09.085 14:44:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:09.085 14:44:47 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:09.085 14:44:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.085 14:44:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.085 14:44:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:09.085 ************************************ 00:05:09.085 START TEST odd_alloc 00:05:09.085 ************************************ 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # 
HUGE_EVEN_ALLOC=yes 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.085 14:44:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:09.343 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.343 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:09.343 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7980656 kB' 'MemAvailable: 9492336 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493548 kB' 'Inactive: 1353048 kB' 'Active(anon): 131348 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122768 kB' 'Mapped: 48860 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146968 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80224 kB' 'KernelStack: 6244 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
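The dump above and the field-by-field comparisons that follow are the xtrace of a plain key lookup over /proc/meminfo: each "field: value" line is split with IFS=': ', compared against the requested key (AnonHugePages here), skipped with "continue" on a mismatch, and the matching value is echoed, with 0 as the fallback. A minimal stand-alone sketch of that pattern, using a made-up helper name rather than the real setup/common.sh code:

    # Illustration only -- not the SPDK helper; the function name is hypothetical.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Skip every field that is not the one we were asked for.
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < /proc/meminfo
        echo 0   # field not present at all
    }
    # Example: get_meminfo_sketch AnonHugePages   (prints 0 on this runner)
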
00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.604 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 
14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
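For reference, the odd_alloc figures in these dumps are internally consistent: the test exports HUGEMEM=2049, and 2049 * 1024 is the 2098176 kB passed to get_test_nr_hugepages at hugepages.sh@159; with 2048 kB hugepages that lands on the odd count of 1025 pages, matching 'HugePages_Total: 1025' and 'Hugetlb: 2099200 kB' in the dump. A quick consistency check, assuming round-up division -- an assumption for illustration, not a quote of hugepages.sh:

    # Back-of-the-envelope check of the figures seen in the meminfo dump.
    hugemem_mb=2049                                   # HUGEMEM set by odd_alloc
    size_kb=$((hugemem_mb * 1024))                    # 2098176 kB requested
    page_kb=2048                                      # Hugepagesize in the dump
    pages=$(( (size_kb + page_kb - 1) / page_kb ))    # round up -> 1025 pages
    echo "$pages pages = $((pages * page_kb)) kB"     # 1025 pages = 2099200 kB
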
00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.605 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7980656 kB' 'MemAvailable: 9492336 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493140 kB' 'Inactive: 1353048 kB' 'Active(anon): 130940 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122312 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146968 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80224 kB' 'KernelStack: 6256 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 
14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.606 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 
14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7980656 kB' 'MemAvailable: 9492336 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493244 kB' 'Inactive: 1353048 kB' 'Active(anon): 131044 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122412 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146968 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80224 kB' 'KernelStack: 6272 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.607 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
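The get_meminfo calls in this stretch are all made with an empty node argument, so the "[[ -e /sys/devices/system/node/node/meminfo ]]" test fails and the helper keeps reading the global /proc/meminfo; per-node counters would only come into play on a NUMA runner. A hedged sketch of that file selection and of the "Node N " prefix stripping visible in the trace -- illustrative, not lifted from setup/common.sh:

    # Pick the per-node meminfo file when a node id is given and the sysfs
    # path exists; otherwise fall back to the machine-wide /proc/meminfo.
    node=""                                      # empty in this run
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix each line with "Node <id> "; strip it so the
    # same "HugePages_*" keys can be matched in both cases.
    sed 's/^Node [0-9]* //' "$mem_f" | grep -E '^HugePages_(Total|Free|Rsvd|Surp):'
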
00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.608 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:09.609 nr_hugepages=1025 00:05:09.609 resv_hugepages=0 00:05:09.609 surplus_hugepages=0 00:05:09.609 anon_hugepages=0 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
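The plain nr_hugepages=1025 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 lines woven into the xtrace above are the test's own echo output, interleaved with the trace. The checks at setup/hugepages.sh@107 and @109 then assert that the count read back from the kernel matches the requested count plus any surplus and reserved pages. A minimal restatement of that arithmetic, using the values echoed in this run (the trace resumes below with the HugePages_Total lookup):

# Values echoed by the verification pass in this run
nr_hugepages=1025
surp=0     # surplus_hugepages
resv=0     # resv_hugepages (HugePages_Rsvd, read just above)
total=1025 # HugePages_Total, read back from /proc/meminfo below

# The form of the assertions at setup/hugepages.sh@107/@110
if (( total == nr_hugepages + surp + resv )); then
    echo 'hugepage accounting consistent'
fi
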
mem_f=/proc/meminfo 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7980656 kB' 'MemAvailable: 9492336 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493164 kB' 'Inactive: 1353048 kB' 'Active(anon): 130964 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122376 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146968 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80224 kB' 'KernelStack: 6272 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 350040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.609 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.610 14:44:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.610 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7980656 kB' 'MemUsed: 4261320 kB' 'SwapCached: 0 kB' 'Active: 493268 kB' 'Inactive: 1353048 kB' 'Active(anon): 131068 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1725712 kB' 'Mapped: 48728 kB' 'AnonPages: 122460 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66744 kB' 'Slab: 146964 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
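Just before the per-node HugePages_Surp scan that continues below, the trace at setup/hugepages.sh@112 called get_nodes, which walked /sys/devices/system/node/node+([0-9]) and found a single node (no_nodes=1, nodes_sys[0]=1025). A small stand-in for that discovery step, using a plain glob instead of the extglob pattern from the trace:

# Count NUMA node directories, in the same spirit as get_nodes above
no_nodes=0
for node in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node ]] || continue
    no_nodes=$(( no_nodes + 1 ))
done
echo "no_nodes=$no_nodes"   # 1 on the single-node VM in this run
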
00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.611 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
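The long runs of [[ Field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] followed by continue above are a linear scan over the meminfo snapshot; the backslashes are only how bash prints the right-hand side as a literal, non-glob string in xtrace. For node-scoped lookups the helper switches to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix with the extglob expansion shown at setup/common.sh@29. An illustrative re-implementation of the whole lookup pattern (not the exact setup/common.sh code):

#!/usr/bin/env bash
shopt -s extglob                      # needed for the +([0-9]) pattern below

meminfo_field() {
    local get=$1 node=$2
    local file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$file"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long run of continues above
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

meminfo_field HugePages_Total      # 1025 in the run traced above
meminfo_field HugePages_Surp 0     # 0 for node 0 in the run traced above
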
00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.612 node0=1025 expecting 1025 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:09.612 00:05:09.612 real 0m0.527s 00:05:09.612 user 0m0.258s 00:05:09.612 sys 0m0.274s 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.612 ************************************ 00:05:09.612 END TEST odd_alloc 00:05:09.612 ************************************ 00:05:09.612 14:44:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:09.612 14:44:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:09.612 14:44:48 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:09.612 14:44:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.612 14:44:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.612 14:44:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:09.612 ************************************ 00:05:09.612 START TEST custom_alloc 00:05:09.612 ************************************ 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- 
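That closes odd_alloc, which reserved an odd count (1025 pages) and confirmed the kernel reported exactly that number back (node0=1025 expecting 1025). The custom_alloc test starting above asks get_test_nr_hugepages for 1048576 kB; with the 2048 kB hugepage size reported in the meminfo dumps, that works out to 512 pages, all assigned to this VM's single node (the setup trace continues below). The conversion, restated:

size_kb=1048576          # argument to get_test_nr_hugepages in the trace
hugepagesize_kb=2048     # 'Hugepagesize: 2048 kB' from the meminfo dumps
echo $(( size_kb / hugepagesize_kb ))   # prints 512, matching nr_hugepages=512
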
setup/hugepages.sh@67 -- # nodes_test=() 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:09.612 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:09.613 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:09.613 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:09.613 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:09.613 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:09.613 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:09.613 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:09.613 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:09.613 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.613 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:10.183 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.183 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:10.183 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
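With nodes_hp[0]=512 collected, the test re-runs /home/vagrant/spdk_repo/spdk/scripts/setup.sh with HUGENODE='nodes_hp[0]=512' so the 512 pages are reserved on node 0; the device lines above are that script skipping or rebinding the PCI devices. For background only, the generic kernel interface for this kind of per-node reservation is the node-local nr_hugepages file in sysfs; whether setup.sh uses exactly this path is not visible in the trace. The verify_nr_hugepages pass resumes below.

# Standard per-node 2 MiB hugepage reservation via sysfs (background sketch,
# not necessarily the mechanism scripts/setup.sh uses internally)
node=0
pages=512
echo "$pages" | sudo tee \
    /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages

# Read back how many pages the kernel actually reserved on that node
cat /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
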
# verify_nr_hugepages 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9030088 kB' 'MemAvailable: 10541768 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 494072 kB' 'Inactive: 1353048 kB' 'Active(anon): 131872 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123016 kB' 'Mapped: 48856 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146992 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80248 kB' 'KernelStack: 6268 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.183 14:44:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.183 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
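The long run of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" entries here is get_meminfo from setup/common.sh walking the captured /proc/meminfo dump field by field until it reaches the requested key (AnonHugePages in this pass), echoing its value when found. A minimal sketch of that lookup, reading /proc/meminfo directly and assuming a simplified helper rather than the exact common.sh implementation (the real one also handles per-node /sys/devices/system/node/nodeN/meminfo when a node argument is given):

    # simplified stand-in for the traced lookup; not the verbatim common.sh code
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # numeric value only; a trailing "kB" unit lands in $_
                return 0
            fi
        done < /proc/meminfo
        echo 0                 # key absent: fall back to 0
    }

    get_meminfo AnonHugePages    # prints 0 on this run, matching the traced "echo 0" and anon=0
    get_meminfo HugePages_Total  # prints 512 once setup.sh has populated the pool

The meminfo dump above already reflects the allocation requested at hugepages.sh@187: HugePages_Total: 512 at Hugepagesize: 2048 kB accounts for the Hugetlb: 1048576 kB figure (512 x 2048 kB).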
00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.184 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241976 kB' 'MemFree: 9030088 kB' 'MemAvailable: 10541764 kB' 'Buffers: 2436 kB' 'Cached: 1723272 kB' 'SwapCached: 0 kB' 'Active: 493640 kB' 'Inactive: 1353044 kB' 'Active(anon): 131440 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122592 kB' 'Mapped: 48916 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146992 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80248 kB' 'KernelStack: 6236 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.185 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.186 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9030088 kB' 'MemAvailable: 10541768 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493520 kB' 'Inactive: 1353048 kB' 'Active(anon): 131320 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122424 kB' 'Mapped: 48916 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146976 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80232 kB' 'KernelStack: 6188 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.187 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
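Once this HugePages_Rsvd pass finishes (resv=0 just below, after anon=0 and surp=0 above), verify_nr_hugepages compares the configured pool against what the kernel reports, which is the "(( 512 == nr_hugepages + surp + resv ))" check traced at setup/hugepages.sh@107-@110 in the entries that follow. A rough sketch of that consistency check, reusing the simplified get_meminfo helper sketched earlier (the exact expressions in hugepages.sh may differ):

    # illustrative version of the traced verification, not the verbatim hugepages.sh code
    nr_hugepages=512                        # requested via HUGENODE='nodes_hp[0]=512'
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 512 in this run

    # the pool the kernel exposes must account exactly for the requested pages
    if (( total != nr_hugepages + surp + resv )); then
        echo "unexpected hugepage count: total=$total" >&2
        exit 1
    fi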
00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:10.188 nr_hugepages=512 00:05:10.188 resv_hugepages=0 00:05:10.188 surplus_hugepages=0 00:05:10.188 anon_hugepages=0 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.188 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9030088 kB' 'MemAvailable: 10541768 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493528 kB' 'Inactive: 1353048 kB' 'Active(anon): 131328 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122732 kB' 'Mapped: 48856 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146968 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80224 kB' 'KernelStack: 6204 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
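
The scan running here is the hugepages.sh@110 lookup of HugePages_Total for the custom_alloc check; once it echoes 512, the script re-asserts the same balance it already tested at @107: the kernel-reported count must equal the requested pages plus surplus plus reserved pages. Roughly, as a sketch using the names and values visible in the trace:

nr_hugepages=512                      # pages requested for this custom_alloc run
surp=$(get_meminfo HugePages_Surp)    # 0 in this log
resv=$(get_meminfo HugePages_Rsvd)    # 0, echoed as resv_hugepages=0 above
total=$(get_meminfo HugePages_Total)  # 512, the value this scan returns
(( total == nr_hugepages + surp + resv )) && echo "hugepage pool balances"

After this global check the same lookup is repeated per NUMA node (get_meminfo HugePages_Surp 0 against /sys/devices/system/node/node0/meminfo) before the node0=512 expecting 512 verdict.
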
00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.189 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.190 
14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9030088 kB' 'MemUsed: 3211888 kB' 'SwapCached: 0 kB' 'Active: 493364 kB' 'Inactive: 1353048 kB' 'Active(anon): 131164 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1725712 kB' 'Mapped: 48728 kB' 'AnonPages: 122620 kB' 'Shmem: 10464 kB' 'KernelStack: 6264 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66744 kB' 'Slab: 146968 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.190 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.191 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.192 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.192 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.192 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.192 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.192 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.192 14:44:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:10.192 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:10.192 node0=512 expecting 512 00:05:10.192 ************************************ 00:05:10.192 END TEST custom_alloc 00:05:10.192 ************************************ 00:05:10.192 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:10.192 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:10.192 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:10.192 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:10.192 14:44:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:10.192 00:05:10.192 real 0m0.509s 00:05:10.192 user 0m0.259s 00:05:10.192 sys 0m0.261s 00:05:10.192 14:44:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.192 14:44:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- 
# set +x 00:05:10.192 14:44:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:10.192 14:44:48 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:10.192 14:44:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.192 14:44:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.192 14:44:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:10.192 ************************************ 00:05:10.192 START TEST no_shrink_alloc 00:05:10.192 ************************************ 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.192 14:44:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:10.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.450 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:10.450 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:10.711 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7976000 kB' 'MemAvailable: 9487680 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 494188 kB' 'Inactive: 1353048 kB' 'Active(anon): 131988 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123000 kB' 'Mapped: 48792 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146976 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80232 kB' 'KernelStack: 6292 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 
14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
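
This second test, no_shrink_alloc, was sized at hugepages.sh@49-@71 to 1024 pages (2097152 kB over the 2048 kB default hugepage size reported in meminfo), all assigned to node 0. The AnonHugePages scan in progress here runs because verify_nr_hugepages@96 saw the transparent-hugepage setting "always [madvise] never", which is not locked to [never], so an anonymous-hugepage baseline is recorded first. A sketch of that gate; the sysfs path below is an assumption (the standard THP knob), not something read from this log:

# "always [madvise] never" is the string compared at hugepages.sh@96 above.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    # THP not disabled: record current anonymous hugepage usage as a baseline
    # before auditing the reserved pool, which is what the scan here is doing.
    anon=$(get_meminfo AnonHugePages)
fi
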
00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.711 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7976000 kB' 'MemAvailable: 9487680 kB' 'Buffers: 2436 kB' 'Cached: 1723276 kB' 'SwapCached: 0 kB' 'Active: 493548 kB' 'Inactive: 1353048 kB' 'Active(anon): 131348 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122324 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146980 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80236 kB' 'KernelStack: 6272 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.712 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7976000 kB' 'MemAvailable: 9487684 kB' 'Buffers: 2436 kB' 'Cached: 1723280 kB' 'SwapCached: 0 kB' 'Active: 493276 kB' 'Inactive: 1353052 kB' 'Active(anon): 131076 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122304 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146968 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80224 kB' 'KernelStack: 6288 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.713 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.714 nr_hugepages=1024 00:05:10.714 resv_hugepages=0 00:05:10.714 surplus_hugepages=0 00:05:10.714 anon_hugepages=0 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
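(At this point the test has collected anon=0, surp=0 and resv=0, and the accounting check at hugepages.sh@107 holds with nr_hugepages=1024, i.e. (( 1024 == nr_hugepages + surp + resv )); the trace then re-runs get_meminfo for HugePages_Total to confirm the kernel view. The same counters can be read straight from /proc/meminfo; an illustrative one-liner using standard procfs fields, not part of the test scripts:

    # prints HugePages_Total/Free/Rsvd/Surp and Hugepagesize; per the snapshot below: 1024/1024/0/0 and 2048 kB
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):' /proc/meminfo
)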
00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7976000 kB' 'MemAvailable: 9487684 kB' 'Buffers: 2436 kB' 'Cached: 1723280 kB' 'SwapCached: 0 kB' 'Active: 493300 kB' 'Inactive: 1353052 kB' 'Active(anon): 131100 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122348 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146964 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80220 kB' 'KernelStack: 6304 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
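As a quick sanity check on the meminfo snapshot just printed (values copied straight from the log), the hugepage fields are self-consistent with the test's later assertion (( 1024 == nr_hugepages + surp + resv )):

# 1024 pages of 2048 kB each account for the whole Hugetlb figure, and none
# of them are reserved or surplus, so the equality the script checks holds.
echo $(( 1024 * 2048 ))     # 2097152 kB, matches the Hugetlb field
echo $(( 1024 + 0 + 0 ))    # HugePages_Total = nr_hugepages + surp + resv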
00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.714 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7976000 kB' 'MemUsed: 4265976 kB' 'SwapCached: 0 kB' 'Active: 493516 kB' 'Inactive: 1353052 kB' 'Active(anon): 131316 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1725716 kB' 'Mapped: 48600 kB' 'AnonPages: 122252 kB' 
'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66744 kB' 'Slab: 146956 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 
14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:10.715 node0=1024 expecting 1024 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:10.715 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:10.716 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:10.716 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:10.716 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.716 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:10.973 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.973 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:10.973 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:10.973 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:10.973 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:10.973 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:10.973 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:10.973 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:10.973 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:10.973 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:10.973 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:10.973 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:10.973 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:10.973 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:10.973 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.973 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.973 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.973 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.973 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.973 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7977328 kB' 'MemAvailable: 9489012 kB' 'Buffers: 2436 kB' 'Cached: 1723280 kB' 'SwapCached: 0 kB' 'Active: 494128 kB' 'Inactive: 1353052 kB' 'Active(anon): 131928 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123064 kB' 'Mapped: 49084 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146928 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80184 kB' 'KernelStack: 6324 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
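The second pass above runs scripts/setup.sh with CLEAR_HUGE=no and NRHUGE=512; because 1024 pages already exist on node0 it keeps them rather than shrinking the pool (hence the INFO message), and verify_nr_hugepages then re-reads meminfo. A hedged illustration of that "allocate only if short" behaviour, not SPDK's actual setup.sh, using the standard per-node sysfs knob:

# Illustration only: grow node0's 2 MiB hugepage pool to NRHUGE if it is
# smaller, but never shrink it, which is why the log reports the pre-existing
# 1024 pages instead of dropping to 512.
NRHUGE=512
node0=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
current=$(cat "$node0/nr_hugepages")
if (( current < NRHUGE )); then
    echo "$NRHUGE" > "$node0/nr_hugepages"
else
    echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
fi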
00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.974 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.235 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.235 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
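The scan in progress here is get_meminfo AnonHugePages, reached because the transparent-hugepage setting logged at hugepages.sh@96 ("always [madvise] never", presumably read from /sys/kernel/mm/transparent_hugepage/enabled) is not pinned to [never]. A rough sketch of that branch, reusing the lookup helper sketched earlier:

# Only when THP is not globally disabled does the script account for
# anonymous hugepages; with THP at "[madvise]" and none in use, the value
# comes back as 0, matching anon_hugepages=0 earlier in the log.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)              # helper sketched above
else
    anon=0
fi
echo "anon=$anon"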
00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7977724 kB' 'MemAvailable: 9489412 kB' 'Buffers: 2436 kB' 'Cached: 1723284 kB' 'SwapCached: 0 kB' 'Active: 493636 kB' 'Inactive: 1353056 kB' 'Active(anon): 131436 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122316 kB' 'Mapped: 48880 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146928 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80184 kB' 'KernelStack: 6288 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
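The trace above and below is the test harness's get_meminfo helper walking /proc/meminfo one field at a time with IFS=': ', skipping every key that is not the one requested (AnonHugePages first, now HugePages_Surp) and echoing the value of the matching line. A minimal standalone sketch of the same lookup pattern, assuming a plain /proc/meminfo with no per-node prefix (the function and variable names are illustrative, not the SPDK helper itself):

  get_meminfo_value() {
      # Echo the numeric value for a single /proc/meminfo key, or 0 if the key is absent.
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      echo 0
  }

  # Example: get_meminfo_value AnonHugePages  ->  0, matching the anon=0 result recorded above.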
00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 
14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.236 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7977864 kB' 'MemAvailable: 9489548 kB' 'Buffers: 2436 kB' 'Cached: 1723280 kB' 'SwapCached: 0 kB' 'Active: 493224 kB' 'Inactive: 1353052 kB' 'Active(anon): 131024 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122448 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146924 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80180 kB' 'KernelStack: 6288 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
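The meminfo snapshot printed just above is internally consistent on the hugepage side: HugePages_Total is 1024 and Hugepagesize is 2048 kB, so the pool size Hugetlb should be 1024 * 2048 kB = 2097152 kB (2 GiB), which is exactly what the snapshot reports, and HugePages_Free equals HugePages_Total, so no huge pages are in use at this point in the test. A quick spot-check of the same identity on a live system, valid when only one hugepage size is configured (the awk program is illustrative, not part of the harness):

  awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {sz=$2} /^Hugetlb:/ {h=$2}
       END {print (t*sz==h ? "hugetlb pool consistent" : "hugetlb pool mismatch")}' /proc/meminfo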
00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
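Each get_meminfo call in this trace also takes an optional NUMA node argument; with node= left empty the [[ -e /sys/devices/system/node/node/meminfo ]] test fails and the helper falls back to the system-wide /proc/meminfo, while the "${mem[@]#Node +([0-9]) }" expansion strips the "Node N " prefix that the per-node files prepend to every line. A sketch of how a per-node lookup would select its source file, assuming the standard sysfs layout (variable names are illustrative):

  node=0                                   # pick a NUMA node
  mem_f=/proc/meminfo                      # system-wide default
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  # per-node lines look like:  Node 0 HugePages_Total:  1024
  grep "HugePages_Total" "$mem_f"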
00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.237 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:11.238 nr_hugepages=1024 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:11.238 resv_hugepages=0 00:05:11.238 surplus_hugepages=0 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:11.238 anon_hugepages=0 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7978116 kB' 'MemAvailable: 9489800 kB' 'Buffers: 2436 kB' 'Cached: 1723280 kB' 'SwapCached: 0 kB' 'Active: 493224 kB' 'Inactive: 1353052 kB' 'Active(anon): 131024 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122456 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 146900 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80156 kB' 'KernelStack: 6272 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
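With anon, surp and resv all read back as 0, the harness asserts just above that the 1024 huge pages reported by the kernel are fully accounted for as nr_hugepages plus surplus plus reserved pages, and then re-reads HugePages_Total (the loop that continues below). A self-contained re-statement of that invariant as a standalone check, using 1024 only because that is the count requested in this run (the awk program and messages are illustrative, not the harness's own code):

  nr_hugepages=1024
  read -r total surp resv < <(awk '
      /^HugePages_Total:/ {t=$2} /^HugePages_Surp:/ {s=$2} /^HugePages_Rsvd:/ {r=$2}
      END {print t, s, r}' /proc/meminfo)
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage pool matches the requested allocation"
  else
      echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2
  fi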
00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7978768 kB' 'MemUsed: 4263208 kB' 'SwapCached: 0 kB' 'Active: 
493456 kB' 'Inactive: 1353052 kB' 'Active(anon): 131256 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1725716 kB' 'Mapped: 48728 kB' 'AnonPages: 122392 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66744 kB' 'Slab: 146896 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 80152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.238 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 
14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:11.239 node0=1024 expecting 1024 00:05:11.239 ************************************ 00:05:11.239 END TEST no_shrink_alloc 00:05:11.239 ************************************ 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:11.239 00:05:11.239 real 0m1.026s 00:05:11.239 user 0m0.502s 00:05:11.239 sys 0m0.522s 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.239 14:44:49 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:11.239 14:44:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:11.239 14:44:49 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:11.239 14:44:49 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:11.239 14:44:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:11.239 
14:44:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:11.239 14:44:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:11.239 14:44:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:11.239 14:44:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:11.239 14:44:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:11.239 14:44:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:11.239 ************************************ 00:05:11.239 END TEST hugepages 00:05:11.239 ************************************ 00:05:11.239 00:05:11.239 real 0m4.428s 00:05:11.239 user 0m2.099s 00:05:11.239 sys 0m2.319s 00:05:11.239 14:44:49 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.239 14:44:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:11.497 14:44:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:11.497 14:44:49 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:11.497 14:44:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.497 14:44:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.497 14:44:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:11.497 ************************************ 00:05:11.497 START TEST driver 00:05:11.497 ************************************ 00:05:11.497 14:44:49 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:11.497 * Looking for test storage... 00:05:11.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:11.497 14:44:49 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:11.497 14:44:49 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:11.497 14:44:49 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:12.064 14:44:50 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:12.064 14:44:50 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.064 14:44:50 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.064 14:44:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:12.064 ************************************ 00:05:12.064 START TEST guess_driver 00:05:12.064 ************************************ 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
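The guess_driver xtrace beginning here shows pick_driver preferring vfio when the host exposes IOMMU groups (or when unsafe no-IOMMU mode is enabled), and otherwise falling back to uio_pci_generic after confirming the module resolves via modprobe --show-depends; the records just below show the VM has zero IOMMU groups, so the fallback wins. Reduced to that decision, a sketch (pick_host_driver is a hypothetical name; only the paths and the modprobe/".ko" check are taken from the log, the rest is assumed):

    # Sketch only: the core of the vfio-vs-uio decision visible in the driver.sh xtrace.
    pick_host_driver() {
        local unsafe_noiommu='' n_groups
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_noiommu=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        n_groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
        if (( n_groups > 0 )) || [[ $unsafe_noiommu == Y ]]; then
            echo vfio-pci                        # IOMMU available, or unsafe no-IOMMU mode enabled
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic                 # fallback when no IOMMU groups exist, as on this VM
        else
            echo 'No valid driver found'
        fi
    }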
00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:12.064 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:12.064 Looking for driver=uio_pci_generic 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.064 14:44:50 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:12.631 14:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:12.631 14:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:12.631 14:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.631 14:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.631 14:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:12.631 14:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.889 14:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.889 14:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:12.889 14:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.889 14:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:12.889 14:44:51 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:12.889 14:44:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.889 14:44:51 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.457 00:05:13.457 real 0m1.491s 00:05:13.457 user 0m0.539s 00:05:13.457 sys 0m0.913s 00:05:13.457 14:44:52 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:05:13.457 ************************************ 00:05:13.457 END TEST guess_driver 00:05:13.457 ************************************ 00:05:13.457 14:44:52 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:13.457 14:44:52 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:13.457 ************************************ 00:05:13.457 END TEST driver 00:05:13.457 ************************************ 00:05:13.457 00:05:13.457 real 0m2.139s 00:05:13.457 user 0m0.774s 00:05:13.457 sys 0m1.373s 00:05:13.457 14:44:52 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.457 14:44:52 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:13.457 14:44:52 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:13.457 14:44:52 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:13.457 14:44:52 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.457 14:44:52 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.457 14:44:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:13.457 ************************************ 00:05:13.457 START TEST devices 00:05:13.457 ************************************ 00:05:13.457 14:44:52 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:13.716 * Looking for test storage... 00:05:13.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:13.716 14:44:52 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:13.716 14:44:52 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:13.716 14:44:52 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.716 14:44:52 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
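The devices test starting here walks /sys/block/nvme* and calls is_block_zoned on each namespace: any device whose queue/zoned attribute reads something other than "none" would be collected as zoned and excluded from the mount tests (the same check repeats below for nvme0n3 and nvme1n1, and none of the namespaces on this VM are zoned). A rough sketch of that filter, simplified from the xtrace rather than copied from autotest_common.sh:

    # Sketch only: collect zoned block devices so later setup/devices tests can skip them.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        if [[ -e $nvme/queue/zoned && $(< "$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1    # zoned namespaces are not usable for the GPT/mount tests below
        fi
    done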
00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:14.283 14:44:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:14.283 14:44:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:14.283 14:44:52 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:14.283 No valid GPT data, bailing 00:05:14.283 14:44:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:14.283 14:44:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:14.283 14:44:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:14.283 14:44:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:14.283 14:44:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:14.283 14:44:52 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:14.283 14:44:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:14.283 
14:44:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:14.283 14:44:52 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:14.541 No valid GPT data, bailing 00:05:14.541 14:44:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:14.541 14:44:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:14.541 14:44:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:14.541 14:44:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:14.541 14:44:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:14.541 14:44:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:14.541 14:44:52 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:14.541 14:44:53 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:14.541 14:44:53 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:14.541 No valid GPT data, bailing 00:05:14.541 14:44:53 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:14.541 14:44:53 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:14.541 14:44:53 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:14.541 14:44:53 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:14.541 14:44:53 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:14.541 14:44:53 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:14.541 14:44:53 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:14.541 14:44:53 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:14.541 No valid GPT data, bailing 00:05:14.541 14:44:53 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:14.541 14:44:53 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:14.541 14:44:53 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:14.541 14:44:53 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:14.541 14:44:53 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:14.541 14:44:53 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:14.541 14:44:53 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:14.541 14:44:53 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.541 14:44:53 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.541 14:44:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:14.541 ************************************ 00:05:14.541 START TEST nvme_mount 00:05:14.541 ************************************ 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:14.541 14:44:53 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:15.914 Creating new GPT entries in memory. 00:05:15.914 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:15.914 other utilities. 00:05:15.914 14:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:15.914 14:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:15.914 14:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:15.914 14:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:15.914 14:44:54 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:16.848 Creating new GPT entries in memory. 00:05:16.848 The operation has completed successfully. 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58957 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:16.848 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:17.106 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.106 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.365 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:17.365 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:17.365 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:17.365 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:17.365 14:44:55 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:17.365 14:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:17.365 14:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.365 14:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:17.365 14:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:17.365 14:44:55 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.365 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:17.365 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:17.365 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:17.365 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.365 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:17.365 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:17.365 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:17.365 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:17.365 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:17.365 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.365 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:17.365 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:17.365 14:44:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.365 14:44:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:17.623 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:17.623 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:17.623 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:17.623 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.623 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:17.623 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:17.881 14:44:56 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.881 14:44:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:18.140 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:18.140 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:18.140 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:18.140 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.140 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:18.140 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.398 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:18.398 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.398 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:18.398 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.398 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:18.398 14:44:56 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:05:18.398 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:18.398 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:18.398 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.399 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:18.399 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:18.399 14:44:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:18.399 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:18.399 00:05:18.399 real 0m3.821s 00:05:18.399 user 0m0.657s 00:05:18.399 sys 0m0.919s 00:05:18.399 14:44:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.399 14:44:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:18.399 ************************************ 00:05:18.399 END TEST nvme_mount 00:05:18.399 ************************************ 00:05:18.399 14:44:57 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:18.399 14:44:57 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:18.399 14:44:57 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.399 14:44:57 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.399 14:44:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:18.399 ************************************ 00:05:18.399 START TEST dm_mount 00:05:18.399 ************************************ 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:18.399 14:44:57 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:19.401 Creating new GPT entries in memory. 00:05:19.401 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:19.401 other utilities. 00:05:19.401 14:44:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:19.401 14:44:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.401 14:44:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.401 14:44:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.401 14:44:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:20.774 Creating new GPT entries in memory. 00:05:20.774 The operation has completed successfully. 00:05:20.774 14:44:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:20.774 14:44:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.774 14:44:59 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:20.774 14:44:59 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:20.774 14:44:59 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:21.709 The operation has completed successfully. 
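Both setup tests above go through the same setup/common.sh helpers; condensed into plain commands (device, script paths and sector bounds are copied from the trace, while the backgrounding of the uevent helper is only inferred from the later wait at common.sh@62, so treat this as an illustrative sketch rather than the script itself), the partitioning step amounts to:

  disk=/dev/nvme0n1
  size=$(( 1073741824 / 4096 ))     # 262144 sectors per partition, as computed at common.sh@51
  sgdisk "$disk" --zap-all          # wipe any existing GPT/PMBR structures
  /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 &
  uevent_pid=$!
  flock "$disk" sgdisk "$disk" --new=1:2048:$(( 2048 + size - 1 ))       # sectors 2048..264191
  flock "$disk" sgdisk "$disk" --new=2:264192:$(( 264192 + size - 1 ))   # sectors 264192..526335
  wait "$uevent_pid"                # only continue once udev has surfaced the new partitions

nvme_mount then formatted and mounted the first partition directly (mkfs.ext4 -qF /dev/nvme0n1p1, mounted under test/setup/nvme_mount), whereas dm_mount, continuing below, first builds a device-mapper target over the two partitions (dmsetup create nvme_dm_test, which appears as /dev/dm-0 and is listed as a holder of both nvme0n1p1 and nvme0n1p2) and formats and mounts that instead; the flock around each sgdisk call presumably serializes partition-table updates against anything else touching the disk while the test runs.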
00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59390 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.709 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.968 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.968 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.968 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.968 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.968 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.968 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:21.968 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.968 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:21.968 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.226 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.484 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.484 14:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.484 14:45:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.484 14:45:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.484 14:45:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.484 14:45:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:22.484 14:45:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:22.484 14:45:01 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:22.484 14:45:01 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.484 14:45:01 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.484 14:45:01 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:22.484 14:45:01 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.484 14:45:01 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:22.742 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:22.742 14:45:01 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:22.742 14:45:01 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:22.742 00:05:22.742 real 0m4.141s 00:05:22.742 user 0m0.428s 00:05:22.742 sys 0m0.690s 00:05:22.742 14:45:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.742 14:45:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:22.742 ************************************ 00:05:22.742 END TEST dm_mount 00:05:22.743 ************************************ 00:05:22.743 14:45:01 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:22.743 14:45:01 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:22.743 14:45:01 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:22.743 14:45:01 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.743 14:45:01 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.743 14:45:01 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:22.743 14:45:01 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.743 14:45:01 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.001 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.001 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.001 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:23.001 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:23.001 14:45:01 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:23.001 14:45:01 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:23.001 14:45:01 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:23.001 14:45:01 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.001 14:45:01 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:23.001 14:45:01 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.001 14:45:01 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:23.001 00:05:23.001 real 0m9.437s 00:05:23.001 user 0m1.693s 00:05:23.001 sys 0m2.175s 00:05:23.001 14:45:01 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.001 14:45:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:23.001 ************************************ 00:05:23.001 END TEST devices 00:05:23.001 ************************************ 00:05:23.001 14:45:01 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:23.001 ************************************ 00:05:23.001 END TEST setup.sh 00:05:23.001 ************************************ 00:05:23.001 00:05:23.001 real 0m20.845s 00:05:23.001 user 0m6.649s 00:05:23.001 sys 0m8.548s 00:05:23.001 14:45:01 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.001 14:45:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:23.001 14:45:01 -- common/autotest_common.sh@1142 -- # return 0 00:05:23.001 14:45:01 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:23.568 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.568 Hugepages 00:05:23.568 node hugesize free / total 00:05:23.568 node0 1048576kB 0 / 0 00:05:23.568 node0 2048kB 2048 / 2048 00:05:23.568 00:05:23.568 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:23.827 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:23.827 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:23.827 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:23.827 14:45:02 -- spdk/autotest.sh@130 -- # uname -s 00:05:23.827 14:45:02 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:23.827 14:45:02 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:23.827 14:45:02 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.437 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.694 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.694 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.694 14:45:03 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:25.629 14:45:04 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:25.629 14:45:04 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:25.629 14:45:04 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:25.629 14:45:04 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:25.629 14:45:04 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:25.629 14:45:04 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:25.629 14:45:04 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.629 14:45:04 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:25.629 14:45:04 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:25.888 14:45:04 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:25.888 14:45:04 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:25.888 14:45:04 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.146 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.146 Waiting for block devices as requested 00:05:26.146 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:26.404 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:26.404 14:45:04 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:26.404 14:45:04 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:26.404 14:45:04 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:26.404 14:45:04 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:26.404 14:45:04 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:26.404 14:45:04 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:26.404 14:45:04 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:26.404 14:45:04 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:26.404 14:45:04 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:26.404 14:45:04 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:26.404 14:45:04 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:26.404 14:45:04 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:26.404 14:45:04 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:26.404 14:45:04 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:26.404 14:45:04 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:26.404 14:45:04 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:26.404 14:45:04 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:26.404 14:45:04 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:26.404 14:45:04 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:26.404 14:45:04 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:26.404 14:45:04 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:26.404 14:45:04 -- common/autotest_common.sh@1557 -- # continue 00:05:26.405 
14:45:04 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:26.405 14:45:04 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:26.405 14:45:04 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:26.405 14:45:04 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:26.405 14:45:04 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:26.405 14:45:04 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:26.405 14:45:04 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:26.405 14:45:04 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:26.405 14:45:04 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:26.405 14:45:04 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:26.405 14:45:04 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:26.405 14:45:04 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:26.405 14:45:04 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:26.405 14:45:04 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:26.405 14:45:04 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:26.405 14:45:04 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:26.405 14:45:04 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:26.405 14:45:04 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:26.405 14:45:04 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:26.405 14:45:04 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:26.405 14:45:04 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:26.405 14:45:04 -- common/autotest_common.sh@1557 -- # continue 00:05:26.405 14:45:04 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:26.405 14:45:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.405 14:45:04 -- common/autotest_common.sh@10 -- # set +x 00:05:26.405 14:45:04 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:26.405 14:45:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.405 14:45:04 -- common/autotest_common.sh@10 -- # set +x 00:05:26.405 14:45:04 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.229 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.229 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.229 14:45:05 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:27.229 14:45:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:27.229 14:45:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.229 14:45:05 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:27.229 14:45:05 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:27.229 14:45:05 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:27.229 14:45:05 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:27.229 14:45:05 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:27.229 14:45:05 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:27.229 14:45:05 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:27.229 14:45:05 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:27.229 14:45:05 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:27.229 14:45:05 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:27.229 14:45:05 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:27.229 14:45:05 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:27.229 14:45:05 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:27.229 14:45:05 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:27.229 14:45:05 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:27.229 14:45:05 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:27.229 14:45:05 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.229 14:45:05 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:27.229 14:45:05 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:27.229 14:45:05 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:27.229 14:45:05 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.230 14:45:05 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:27.230 14:45:05 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:27.230 14:45:05 -- common/autotest_common.sh@1593 -- # return 0 00:05:27.230 14:45:05 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:27.230 14:45:05 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:27.230 14:45:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:27.488 14:45:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:27.488 14:45:05 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:27.488 14:45:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.488 14:45:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.488 14:45:05 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:27.488 14:45:05 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:27.488 14:45:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.488 14:45:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.488 14:45:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.488 ************************************ 00:05:27.488 START TEST env 00:05:27.488 ************************************ 00:05:27.488 14:45:05 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:27.488 * Looking for test storage... 
00:05:27.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:27.488 14:45:05 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:27.488 14:45:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.488 14:45:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.488 14:45:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.488 ************************************ 00:05:27.488 START TEST env_memory 00:05:27.488 ************************************ 00:05:27.488 14:45:05 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:27.488 00:05:27.488 00:05:27.488 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.488 http://cunit.sourceforge.net/ 00:05:27.488 00:05:27.488 00:05:27.488 Suite: memory 00:05:27.488 Test: alloc and free memory map ...[2024-07-12 14:45:06.027748] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:27.488 passed 00:05:27.488 Test: mem map translation ...[2024-07-12 14:45:06.058888] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:27.488 [2024-07-12 14:45:06.058963] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:27.488 [2024-07-12 14:45:06.059019] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:27.488 [2024-07-12 14:45:06.059031] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:27.488 passed 00:05:27.488 Test: mem map registration ...[2024-07-12 14:45:06.123534] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:27.488 [2024-07-12 14:45:06.123601] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:27.748 passed 00:05:27.748 Test: mem map adjacent registrations ...passed 00:05:27.748 00:05:27.748 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.748 suites 1 1 n/a 0 0 00:05:27.748 tests 4 4 4 0 0 00:05:27.748 asserts 152 152 152 0 n/a 00:05:27.748 00:05:27.748 Elapsed time = 0.215 seconds 00:05:27.748 00:05:27.748 real 0m0.232s 00:05:27.748 user 0m0.216s 00:05:27.748 sys 0m0.012s 00:05:27.748 14:45:06 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.748 14:45:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:27.748 ************************************ 00:05:27.748 END TEST env_memory 00:05:27.748 ************************************ 00:05:27.748 14:45:06 env -- common/autotest_common.sh@1142 -- # return 0 00:05:27.748 14:45:06 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:27.748 14:45:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.748 14:45:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.748 14:45:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.748 ************************************ 00:05:27.748 START TEST env_vtophys 
00:05:27.748 ************************************ 00:05:27.748 14:45:06 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:27.748 EAL: lib.eal log level changed from notice to debug 00:05:27.748 EAL: Detected lcore 0 as core 0 on socket 0 00:05:27.748 EAL: Detected lcore 1 as core 0 on socket 0 00:05:27.748 EAL: Detected lcore 2 as core 0 on socket 0 00:05:27.748 EAL: Detected lcore 3 as core 0 on socket 0 00:05:27.748 EAL: Detected lcore 4 as core 0 on socket 0 00:05:27.748 EAL: Detected lcore 5 as core 0 on socket 0 00:05:27.748 EAL: Detected lcore 6 as core 0 on socket 0 00:05:27.748 EAL: Detected lcore 7 as core 0 on socket 0 00:05:27.748 EAL: Detected lcore 8 as core 0 on socket 0 00:05:27.748 EAL: Detected lcore 9 as core 0 on socket 0 00:05:27.748 EAL: Maximum logical cores by configuration: 128 00:05:27.748 EAL: Detected CPU lcores: 10 00:05:27.748 EAL: Detected NUMA nodes: 1 00:05:27.748 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:27.748 EAL: Detected shared linkage of DPDK 00:05:27.748 EAL: No shared files mode enabled, IPC will be disabled 00:05:27.748 EAL: Selected IOVA mode 'PA' 00:05:27.748 EAL: Probing VFIO support... 00:05:27.748 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:27.748 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:27.748 EAL: Ask a virtual area of 0x2e000 bytes 00:05:27.748 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:27.748 EAL: Setting up physically contiguous memory... 00:05:27.748 EAL: Setting maximum number of open files to 524288 00:05:27.748 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:27.748 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:27.748 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.748 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:27.748 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.748 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.748 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:27.748 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:27.748 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.748 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:27.748 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.748 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.748 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:27.748 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:27.748 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.748 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:27.748 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.748 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.748 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:27.748 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:27.748 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.748 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:27.748 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.748 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.748 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:27.748 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:27.748 EAL: Hugepages will be freed exactly as allocated. 
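A quick cross-check of the reservations above: each of the four memseg lists is created for n_segs:8192 hugepages of hugepage_sz:2097152 bytes, and

  printf '0x%x\n' $(( 8192 * 2097152 ))   # -> 0x400000000, i.e. 16 GiB, the size of each reserved area

so the roughly 64 GiB asked for here is virtual address space only; it gets backed by actual hugepages later, as the "Heap on socket 0 was expanded by ..." lines below show.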
00:05:27.748 EAL: No shared files mode enabled, IPC is disabled 00:05:27.748 EAL: No shared files mode enabled, IPC is disabled 00:05:27.748 EAL: TSC frequency is ~2200000 KHz 00:05:27.748 EAL: Main lcore 0 is ready (tid=7fcba1628a00;cpuset=[0]) 00:05:27.748 EAL: Trying to obtain current memory policy. 00:05:27.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.748 EAL: Restoring previous memory policy: 0 00:05:27.748 EAL: request: mp_malloc_sync 00:05:27.748 EAL: No shared files mode enabled, IPC is disabled 00:05:27.748 EAL: Heap on socket 0 was expanded by 2MB 00:05:27.748 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:28.007 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:28.007 EAL: Mem event callback 'spdk:(nil)' registered 00:05:28.007 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:28.007 00:05:28.007 00:05:28.007 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.007 http://cunit.sourceforge.net/ 00:05:28.007 00:05:28.007 00:05:28.007 Suite: components_suite 00:05:28.007 Test: vtophys_malloc_test ...passed 00:05:28.007 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:28.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.007 EAL: Restoring previous memory policy: 4 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was expanded by 4MB 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was shrunk by 4MB 00:05:28.007 EAL: Trying to obtain current memory policy. 00:05:28.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.007 EAL: Restoring previous memory policy: 4 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was expanded by 6MB 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was shrunk by 6MB 00:05:28.007 EAL: Trying to obtain current memory policy. 00:05:28.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.007 EAL: Restoring previous memory policy: 4 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was expanded by 10MB 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was shrunk by 10MB 00:05:28.007 EAL: Trying to obtain current memory policy. 
00:05:28.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.007 EAL: Restoring previous memory policy: 4 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was expanded by 18MB 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was shrunk by 18MB 00:05:28.007 EAL: Trying to obtain current memory policy. 00:05:28.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.007 EAL: Restoring previous memory policy: 4 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was expanded by 34MB 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was shrunk by 34MB 00:05:28.007 EAL: Trying to obtain current memory policy. 00:05:28.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.007 EAL: Restoring previous memory policy: 4 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was expanded by 66MB 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was shrunk by 66MB 00:05:28.007 EAL: Trying to obtain current memory policy. 00:05:28.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.007 EAL: Restoring previous memory policy: 4 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was expanded by 130MB 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was shrunk by 130MB 00:05:28.007 EAL: Trying to obtain current memory policy. 00:05:28.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.007 EAL: Restoring previous memory policy: 4 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was expanded by 258MB 00:05:28.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.007 EAL: request: mp_malloc_sync 00:05:28.007 EAL: No shared files mode enabled, IPC is disabled 00:05:28.007 EAL: Heap on socket 0 was shrunk by 258MB 00:05:28.007 EAL: Trying to obtain current memory policy. 
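As an aside on the numbers in this suite: the expand/shrink sizes reported so far (4, 6, 10, 18, 34, 66, 130 and 258 MB, with 514 and 1026 MB following below) all fit 2^k + 2 MB for k = 1..10, which is consistent with vtophys_spdk_malloc_test doubling its allocation each round with an apparently constant 2 MB (one hugepage) of extra heap on top.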
00:05:28.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.266 EAL: Restoring previous memory policy: 4 00:05:28.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.266 EAL: request: mp_malloc_sync 00:05:28.266 EAL: No shared files mode enabled, IPC is disabled 00:05:28.266 EAL: Heap on socket 0 was expanded by 514MB 00:05:28.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.266 EAL: request: mp_malloc_sync 00:05:28.266 EAL: No shared files mode enabled, IPC is disabled 00:05:28.266 EAL: Heap on socket 0 was shrunk by 514MB 00:05:28.266 EAL: Trying to obtain current memory policy. 00:05:28.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.536 EAL: Restoring previous memory policy: 4 00:05:28.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.536 EAL: request: mp_malloc_sync 00:05:28.536 EAL: No shared files mode enabled, IPC is disabled 00:05:28.536 EAL: Heap on socket 0 was expanded by 1026MB 00:05:28.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.536 EAL: request: mp_malloc_sync 00:05:28.536 EAL: No shared files mode enabled, IPC is disabled 00:05:28.536 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:28.536 passed 00:05:28.536 00:05:28.536 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.536 suites 1 1 n/a 0 0 00:05:28.536 tests 2 2 2 0 0 00:05:28.536 asserts 5358 5358 5358 0 n/a 00:05:28.536 00:05:28.536 Elapsed time = 0.722 seconds 00:05:28.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.536 EAL: request: mp_malloc_sync 00:05:28.536 EAL: No shared files mode enabled, IPC is disabled 00:05:28.536 EAL: Heap on socket 0 was shrunk by 2MB 00:05:28.536 EAL: No shared files mode enabled, IPC is disabled 00:05:28.536 EAL: No shared files mode enabled, IPC is disabled 00:05:28.536 EAL: No shared files mode enabled, IPC is disabled 00:05:28.800 00:05:28.800 real 0m0.924s 00:05:28.800 user 0m0.459s 00:05:28.800 sys 0m0.327s 00:05:28.800 14:45:07 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.800 ************************************ 00:05:28.800 END TEST env_vtophys 00:05:28.800 14:45:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:28.800 ************************************ 00:05:28.800 14:45:07 env -- common/autotest_common.sh@1142 -- # return 0 00:05:28.800 14:45:07 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:28.800 14:45:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.800 14:45:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.800 14:45:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.800 ************************************ 00:05:28.800 START TEST env_pci 00:05:28.800 ************************************ 00:05:28.800 14:45:07 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:28.800 00:05:28.800 00:05:28.800 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.800 http://cunit.sourceforge.net/ 00:05:28.800 00:05:28.800 00:05:28.800 Suite: pci 00:05:28.800 Test: pci_hook ...[2024-07-12 14:45:07.247375] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60573 has claimed it 00:05:28.800 passed 00:05:28.800 00:05:28.800 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.800 suites 1 1 n/a 0 0 00:05:28.800 tests 1 1 1 0 0 00:05:28.800 asserts 25 25 25 0 n/a 00:05:28.800 
00:05:28.800 Elapsed time = 0.002 seconds 00:05:28.800 EAL: Cannot find device (10000:00:01.0) 00:05:28.800 EAL: Failed to attach device on primary process 00:05:28.800 00:05:28.800 real 0m0.020s 00:05:28.800 user 0m0.010s 00:05:28.800 sys 0m0.009s 00:05:28.800 14:45:07 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.800 14:45:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:28.800 ************************************ 00:05:28.800 END TEST env_pci 00:05:28.800 ************************************ 00:05:28.800 14:45:07 env -- common/autotest_common.sh@1142 -- # return 0 00:05:28.800 14:45:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:28.800 14:45:07 env -- env/env.sh@15 -- # uname 00:05:28.800 14:45:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:28.800 14:45:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:28.800 14:45:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.800 14:45:07 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:28.800 14:45:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.800 14:45:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.800 ************************************ 00:05:28.800 START TEST env_dpdk_post_init 00:05:28.800 ************************************ 00:05:28.800 14:45:07 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.800 EAL: Detected CPU lcores: 10 00:05:28.800 EAL: Detected NUMA nodes: 1 00:05:28.800 EAL: Detected shared linkage of DPDK 00:05:28.800 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.800 EAL: Selected IOVA mode 'PA' 00:05:28.800 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.058 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:29.058 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:29.058 Starting DPDK initialization... 00:05:29.058 Starting SPDK post initialization... 00:05:29.058 SPDK NVMe probe 00:05:29.058 Attaching to 0000:00:10.0 00:05:29.058 Attaching to 0000:00:11.0 00:05:29.058 Attached to 0000:00:10.0 00:05:29.058 Attached to 0000:00:11.0 00:05:29.058 Cleaning up... 
00:05:29.058 00:05:29.058 real 0m0.184s 00:05:29.058 user 0m0.045s 00:05:29.058 sys 0m0.039s 00:05:29.058 14:45:07 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.058 14:45:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.058 ************************************ 00:05:29.058 END TEST env_dpdk_post_init 00:05:29.058 ************************************ 00:05:29.058 14:45:07 env -- common/autotest_common.sh@1142 -- # return 0 00:05:29.058 14:45:07 env -- env/env.sh@26 -- # uname 00:05:29.058 14:45:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:29.058 14:45:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.058 14:45:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.058 14:45:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.058 14:45:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.058 ************************************ 00:05:29.058 START TEST env_mem_callbacks 00:05:29.058 ************************************ 00:05:29.058 14:45:07 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.058 EAL: Detected CPU lcores: 10 00:05:29.058 EAL: Detected NUMA nodes: 1 00:05:29.058 EAL: Detected shared linkage of DPDK 00:05:29.058 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.058 EAL: Selected IOVA mode 'PA' 00:05:29.058 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.058 00:05:29.058 00:05:29.058 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.058 http://cunit.sourceforge.net/ 00:05:29.058 00:05:29.058 00:05:29.058 Suite: memory 00:05:29.058 Test: test ... 
00:05:29.058 register 0x200000200000 2097152 00:05:29.058 malloc 3145728 00:05:29.058 register 0x200000400000 4194304 00:05:29.058 buf 0x200000500000 len 3145728 PASSED 00:05:29.058 malloc 64 00:05:29.058 buf 0x2000004fff40 len 64 PASSED 00:05:29.058 malloc 4194304 00:05:29.058 register 0x200000800000 6291456 00:05:29.058 buf 0x200000a00000 len 4194304 PASSED 00:05:29.058 free 0x200000500000 3145728 00:05:29.058 free 0x2000004fff40 64 00:05:29.058 unregister 0x200000400000 4194304 PASSED 00:05:29.058 free 0x200000a00000 4194304 00:05:29.058 unregister 0x200000800000 6291456 PASSED 00:05:29.058 malloc 8388608 00:05:29.058 register 0x200000400000 10485760 00:05:29.058 buf 0x200000600000 len 8388608 PASSED 00:05:29.058 free 0x200000600000 8388608 00:05:29.058 unregister 0x200000400000 10485760 PASSED 00:05:29.058 passed 00:05:29.058 00:05:29.058 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.058 suites 1 1 n/a 0 0 00:05:29.058 tests 1 1 1 0 0 00:05:29.058 asserts 15 15 15 0 n/a 00:05:29.058 00:05:29.059 Elapsed time = 0.006 seconds 00:05:29.059 00:05:29.059 real 0m0.137s 00:05:29.059 user 0m0.012s 00:05:29.059 sys 0m0.024s 00:05:29.059 14:45:07 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.059 14:45:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:29.059 ************************************ 00:05:29.059 END TEST env_mem_callbacks 00:05:29.059 ************************************ 00:05:29.323 14:45:07 env -- common/autotest_common.sh@1142 -- # return 0 00:05:29.323 00:05:29.323 real 0m1.819s 00:05:29.323 user 0m0.842s 00:05:29.323 sys 0m0.626s 00:05:29.323 14:45:07 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.323 14:45:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.323 ************************************ 00:05:29.323 END TEST env 00:05:29.323 ************************************ 00:05:29.323 14:45:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:29.323 14:45:07 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:29.323 14:45:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.323 14:45:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.323 14:45:07 -- common/autotest_common.sh@10 -- # set +x 00:05:29.323 ************************************ 00:05:29.323 START TEST rpc 00:05:29.323 ************************************ 00:05:29.323 14:45:07 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:29.323 * Looking for test storage... 00:05:29.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:29.323 14:45:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60682 00:05:29.323 14:45:07 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:29.323 14:45:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.323 14:45:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60682 00:05:29.323 14:45:07 rpc -- common/autotest_common.sh@829 -- # '[' -z 60682 ']' 00:05:29.323 14:45:07 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.323 14:45:07 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.323 14:45:07 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:29.323 14:45:07 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.323 14:45:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.323 [2024-07-12 14:45:07.926350] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:05:29.323 [2024-07-12 14:45:07.926452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60682 ] 00:05:29.581 [2024-07-12 14:45:08.069129] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.581 [2024-07-12 14:45:08.158257] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:29.581 [2024-07-12 14:45:08.158316] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60682' to capture a snapshot of events at runtime. 00:05:29.581 [2024-07-12 14:45:08.158330] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:29.581 [2024-07-12 14:45:08.158340] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:29.581 [2024-07-12 14:45:08.158349] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60682 for offline analysis/debug. 00:05:29.581 [2024-07-12 14:45:08.158386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.515 14:45:08 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.515 14:45:08 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:30.515 14:45:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.515 14:45:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.515 14:45:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:30.515 14:45:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:30.515 14:45:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.515 14:45:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.515 14:45:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.515 ************************************ 00:05:30.515 START TEST rpc_integrity 00:05:30.515 ************************************ 00:05:30.515 14:45:08 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:30.515 14:45:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.515 14:45:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.515 14:45:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.515 14:45:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.515 14:45:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.515 14:45:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.515 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.515 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.515 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.515 14:45:09 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.515 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.515 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:30.515 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.515 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.515 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.515 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.515 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.515 { 00:05:30.515 "aliases": [ 00:05:30.515 "09d65ceb-555b-48cc-a7f8-d7e54dd6dc13" 00:05:30.515 ], 00:05:30.515 "assigned_rate_limits": { 00:05:30.515 "r_mbytes_per_sec": 0, 00:05:30.515 "rw_ios_per_sec": 0, 00:05:30.515 "rw_mbytes_per_sec": 0, 00:05:30.515 "w_mbytes_per_sec": 0 00:05:30.515 }, 00:05:30.515 "block_size": 512, 00:05:30.515 "claimed": false, 00:05:30.515 "driver_specific": {}, 00:05:30.515 "memory_domains": [ 00:05:30.515 { 00:05:30.515 "dma_device_id": "system", 00:05:30.515 "dma_device_type": 1 00:05:30.515 }, 00:05:30.515 { 00:05:30.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.515 "dma_device_type": 2 00:05:30.515 } 00:05:30.515 ], 00:05:30.515 "name": "Malloc0", 00:05:30.515 "num_blocks": 16384, 00:05:30.515 "product_name": "Malloc disk", 00:05:30.515 "supported_io_types": { 00:05:30.515 "abort": true, 00:05:30.515 "compare": false, 00:05:30.515 "compare_and_write": false, 00:05:30.515 "copy": true, 00:05:30.515 "flush": true, 00:05:30.515 "get_zone_info": false, 00:05:30.515 "nvme_admin": false, 00:05:30.515 "nvme_io": false, 00:05:30.515 "nvme_io_md": false, 00:05:30.515 "nvme_iov_md": false, 00:05:30.515 "read": true, 00:05:30.515 "reset": true, 00:05:30.515 "seek_data": false, 00:05:30.515 "seek_hole": false, 00:05:30.515 "unmap": true, 00:05:30.515 "write": true, 00:05:30.515 "write_zeroes": true, 00:05:30.515 "zcopy": true, 00:05:30.515 "zone_append": false, 00:05:30.515 "zone_management": false 00:05:30.515 }, 00:05:30.515 "uuid": "09d65ceb-555b-48cc-a7f8-d7e54dd6dc13", 00:05:30.515 "zoned": false 00:05:30.515 } 00:05:30.515 ]' 00:05:30.516 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.516 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.516 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:30.516 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.516 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.516 [2024-07-12 14:45:09.121397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:30.516 [2024-07-12 14:45:09.121467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.516 [2024-07-12 14:45:09.121491] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1143c70 00:05:30.516 [2024-07-12 14:45:09.121501] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.516 [2024-07-12 14:45:09.123098] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.516 [2024-07-12 14:45:09.123141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.516 Passthru0 00:05:30.516 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.516 
14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.516 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.516 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.516 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.516 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.516 { 00:05:30.516 "aliases": [ 00:05:30.516 "09d65ceb-555b-48cc-a7f8-d7e54dd6dc13" 00:05:30.516 ], 00:05:30.516 "assigned_rate_limits": { 00:05:30.516 "r_mbytes_per_sec": 0, 00:05:30.516 "rw_ios_per_sec": 0, 00:05:30.516 "rw_mbytes_per_sec": 0, 00:05:30.516 "w_mbytes_per_sec": 0 00:05:30.516 }, 00:05:30.516 "block_size": 512, 00:05:30.516 "claim_type": "exclusive_write", 00:05:30.516 "claimed": true, 00:05:30.516 "driver_specific": {}, 00:05:30.516 "memory_domains": [ 00:05:30.516 { 00:05:30.516 "dma_device_id": "system", 00:05:30.516 "dma_device_type": 1 00:05:30.516 }, 00:05:30.516 { 00:05:30.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.516 "dma_device_type": 2 00:05:30.516 } 00:05:30.516 ], 00:05:30.516 "name": "Malloc0", 00:05:30.516 "num_blocks": 16384, 00:05:30.516 "product_name": "Malloc disk", 00:05:30.516 "supported_io_types": { 00:05:30.516 "abort": true, 00:05:30.516 "compare": false, 00:05:30.516 "compare_and_write": false, 00:05:30.516 "copy": true, 00:05:30.516 "flush": true, 00:05:30.516 "get_zone_info": false, 00:05:30.516 "nvme_admin": false, 00:05:30.516 "nvme_io": false, 00:05:30.516 "nvme_io_md": false, 00:05:30.516 "nvme_iov_md": false, 00:05:30.516 "read": true, 00:05:30.516 "reset": true, 00:05:30.516 "seek_data": false, 00:05:30.516 "seek_hole": false, 00:05:30.516 "unmap": true, 00:05:30.516 "write": true, 00:05:30.516 "write_zeroes": true, 00:05:30.516 "zcopy": true, 00:05:30.516 "zone_append": false, 00:05:30.516 "zone_management": false 00:05:30.516 }, 00:05:30.516 "uuid": "09d65ceb-555b-48cc-a7f8-d7e54dd6dc13", 00:05:30.516 "zoned": false 00:05:30.516 }, 00:05:30.516 { 00:05:30.516 "aliases": [ 00:05:30.516 "89738f69-11d7-5432-8b1a-0095f8706cb8" 00:05:30.516 ], 00:05:30.516 "assigned_rate_limits": { 00:05:30.516 "r_mbytes_per_sec": 0, 00:05:30.516 "rw_ios_per_sec": 0, 00:05:30.516 "rw_mbytes_per_sec": 0, 00:05:30.516 "w_mbytes_per_sec": 0 00:05:30.516 }, 00:05:30.516 "block_size": 512, 00:05:30.516 "claimed": false, 00:05:30.516 "driver_specific": { 00:05:30.516 "passthru": { 00:05:30.516 "base_bdev_name": "Malloc0", 00:05:30.516 "name": "Passthru0" 00:05:30.516 } 00:05:30.516 }, 00:05:30.516 "memory_domains": [ 00:05:30.516 { 00:05:30.516 "dma_device_id": "system", 00:05:30.516 "dma_device_type": 1 00:05:30.516 }, 00:05:30.516 { 00:05:30.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.516 "dma_device_type": 2 00:05:30.516 } 00:05:30.516 ], 00:05:30.516 "name": "Passthru0", 00:05:30.516 "num_blocks": 16384, 00:05:30.516 "product_name": "passthru", 00:05:30.516 "supported_io_types": { 00:05:30.516 "abort": true, 00:05:30.516 "compare": false, 00:05:30.516 "compare_and_write": false, 00:05:30.516 "copy": true, 00:05:30.516 "flush": true, 00:05:30.516 "get_zone_info": false, 00:05:30.516 "nvme_admin": false, 00:05:30.516 "nvme_io": false, 00:05:30.516 "nvme_io_md": false, 00:05:30.516 "nvme_iov_md": false, 00:05:30.516 "read": true, 00:05:30.516 "reset": true, 00:05:30.516 "seek_data": false, 00:05:30.516 "seek_hole": false, 00:05:30.516 "unmap": true, 00:05:30.516 "write": true, 00:05:30.516 "write_zeroes": true, 00:05:30.516 
"zcopy": true, 00:05:30.516 "zone_append": false, 00:05:30.516 "zone_management": false 00:05:30.516 }, 00:05:30.516 "uuid": "89738f69-11d7-5432-8b1a-0095f8706cb8", 00:05:30.516 "zoned": false 00:05:30.516 } 00:05:30.516 ]' 00:05:30.516 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.775 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.775 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.775 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.775 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.775 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.775 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:30.775 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.775 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.775 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.775 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.775 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.775 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.775 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.775 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.775 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:30.775 14:45:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.775 00:05:30.775 real 0m0.342s 00:05:30.775 user 0m0.233s 00:05:30.775 sys 0m0.036s 00:05:30.775 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.775 14:45:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.775 ************************************ 00:05:30.775 END TEST rpc_integrity 00:05:30.775 ************************************ 00:05:30.775 14:45:09 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.775 14:45:09 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:30.775 14:45:09 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.775 14:45:09 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.775 14:45:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.775 ************************************ 00:05:30.775 START TEST rpc_plugins 00:05:30.775 ************************************ 00:05:30.775 14:45:09 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:30.775 14:45:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:30.775 14:45:09 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.775 14:45:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.775 14:45:09 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.775 14:45:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:30.775 14:45:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:30.775 14:45:09 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.775 14:45:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.775 14:45:09 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.775 14:45:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 
00:05:30.775 { 00:05:30.775 "aliases": [ 00:05:30.775 "8f374f79-430a-4a4f-a279-07f8004f6e32" 00:05:30.775 ], 00:05:30.775 "assigned_rate_limits": { 00:05:30.775 "r_mbytes_per_sec": 0, 00:05:30.775 "rw_ios_per_sec": 0, 00:05:30.775 "rw_mbytes_per_sec": 0, 00:05:30.775 "w_mbytes_per_sec": 0 00:05:30.775 }, 00:05:30.775 "block_size": 4096, 00:05:30.775 "claimed": false, 00:05:30.775 "driver_specific": {}, 00:05:30.775 "memory_domains": [ 00:05:30.775 { 00:05:30.775 "dma_device_id": "system", 00:05:30.775 "dma_device_type": 1 00:05:30.775 }, 00:05:30.775 { 00:05:30.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.775 "dma_device_type": 2 00:05:30.775 } 00:05:30.775 ], 00:05:30.775 "name": "Malloc1", 00:05:30.775 "num_blocks": 256, 00:05:30.775 "product_name": "Malloc disk", 00:05:30.775 "supported_io_types": { 00:05:30.775 "abort": true, 00:05:30.775 "compare": false, 00:05:30.775 "compare_and_write": false, 00:05:30.775 "copy": true, 00:05:30.775 "flush": true, 00:05:30.775 "get_zone_info": false, 00:05:30.775 "nvme_admin": false, 00:05:30.775 "nvme_io": false, 00:05:30.775 "nvme_io_md": false, 00:05:30.775 "nvme_iov_md": false, 00:05:30.775 "read": true, 00:05:30.775 "reset": true, 00:05:30.775 "seek_data": false, 00:05:30.775 "seek_hole": false, 00:05:30.775 "unmap": true, 00:05:30.775 "write": true, 00:05:30.775 "write_zeroes": true, 00:05:30.775 "zcopy": true, 00:05:30.775 "zone_append": false, 00:05:30.775 "zone_management": false 00:05:30.775 }, 00:05:30.775 "uuid": "8f374f79-430a-4a4f-a279-07f8004f6e32", 00:05:30.775 "zoned": false 00:05:30.775 } 00:05:30.775 ]' 00:05:30.775 14:45:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:31.034 14:45:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:31.034 14:45:09 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:31.034 14:45:09 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.034 14:45:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.034 14:45:09 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.034 14:45:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:31.034 14:45:09 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.034 14:45:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.034 14:45:09 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.034 14:45:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:31.034 14:45:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:31.034 14:45:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:31.034 00:05:31.034 real 0m0.186s 00:05:31.034 user 0m0.135s 00:05:31.034 sys 0m0.017s 00:05:31.034 14:45:09 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.034 ************************************ 00:05:31.034 14:45:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.034 END TEST rpc_plugins 00:05:31.034 ************************************ 00:05:31.034 14:45:09 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:31.034 14:45:09 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:31.034 14:45:09 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.034 14:45:09 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.034 14:45:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.034 ************************************ 00:05:31.034 START TEST 
rpc_trace_cmd_test 00:05:31.034 ************************************ 00:05:31.034 14:45:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:31.034 14:45:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:31.034 14:45:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:31.034 14:45:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.034 14:45:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:31.034 14:45:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.034 14:45:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:31.034 "bdev": { 00:05:31.034 "mask": "0x8", 00:05:31.034 "tpoint_mask": "0xffffffffffffffff" 00:05:31.034 }, 00:05:31.034 "bdev_nvme": { 00:05:31.034 "mask": "0x4000", 00:05:31.034 "tpoint_mask": "0x0" 00:05:31.034 }, 00:05:31.034 "blobfs": { 00:05:31.034 "mask": "0x80", 00:05:31.034 "tpoint_mask": "0x0" 00:05:31.034 }, 00:05:31.034 "dsa": { 00:05:31.034 "mask": "0x200", 00:05:31.034 "tpoint_mask": "0x0" 00:05:31.034 }, 00:05:31.034 "ftl": { 00:05:31.034 "mask": "0x40", 00:05:31.034 "tpoint_mask": "0x0" 00:05:31.034 }, 00:05:31.034 "iaa": { 00:05:31.034 "mask": "0x1000", 00:05:31.034 "tpoint_mask": "0x0" 00:05:31.034 }, 00:05:31.034 "iscsi_conn": { 00:05:31.034 "mask": "0x2", 00:05:31.034 "tpoint_mask": "0x0" 00:05:31.034 }, 00:05:31.034 "nvme_pcie": { 00:05:31.034 "mask": "0x800", 00:05:31.034 "tpoint_mask": "0x0" 00:05:31.034 }, 00:05:31.034 "nvme_tcp": { 00:05:31.034 "mask": "0x2000", 00:05:31.034 "tpoint_mask": "0x0" 00:05:31.034 }, 00:05:31.034 "nvmf_rdma": { 00:05:31.034 "mask": "0x10", 00:05:31.034 "tpoint_mask": "0x0" 00:05:31.034 }, 00:05:31.034 "nvmf_tcp": { 00:05:31.034 "mask": "0x20", 00:05:31.034 "tpoint_mask": "0x0" 00:05:31.034 }, 00:05:31.034 "scsi": { 00:05:31.034 "mask": "0x4", 00:05:31.034 "tpoint_mask": "0x0" 00:05:31.034 }, 00:05:31.034 "sock": { 00:05:31.034 "mask": "0x8000", 00:05:31.034 "tpoint_mask": "0x0" 00:05:31.034 }, 00:05:31.034 "thread": { 00:05:31.034 "mask": "0x400", 00:05:31.034 "tpoint_mask": "0x0" 00:05:31.034 }, 00:05:31.034 "tpoint_group_mask": "0x8", 00:05:31.034 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60682" 00:05:31.034 }' 00:05:31.034 14:45:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:31.034 14:45:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:31.034 14:45:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:31.293 14:45:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:31.293 14:45:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:31.293 14:45:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:31.293 14:45:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:31.293 14:45:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:31.293 14:45:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:31.293 14:45:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:31.293 00:05:31.293 real 0m0.262s 00:05:31.293 user 0m0.227s 00:05:31.293 sys 0m0.026s 00:05:31.293 14:45:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.293 14:45:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:31.293 ************************************ 00:05:31.293 END TEST 
rpc_trace_cmd_test 00:05:31.293 ************************************ 00:05:31.293 14:45:09 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:31.293 14:45:09 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:31.293 14:45:09 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:31.293 14:45:09 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.293 14:45:09 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.293 14:45:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.293 ************************************ 00:05:31.293 START TEST go_rpc 00:05:31.293 ************************************ 00:05:31.293 14:45:09 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:05:31.293 14:45:09 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:31.293 14:45:09 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:31.293 14:45:09 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:31.293 14:45:09 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:31.552 14:45:09 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.552 14:45:09 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.552 14:45:09 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.552 14:45:09 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.552 14:45:09 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:31.552 14:45:09 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:31.552 14:45:09 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["289cbc6f-17a7-422e-94e3-d8f78f270a70"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"289cbc6f-17a7-422e-94e3-d8f78f270a70","zoned":false}]' 00:05:31.552 14:45:09 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:31.552 14:45:10 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:31.552 14:45:10 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:31.552 14:45:10 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.552 14:45:10 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.552 14:45:10 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.552 14:45:10 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:31.552 14:45:10 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:31.552 14:45:10 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:31.552 14:45:10 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:31.552 00:05:31.552 real 0m0.208s 00:05:31.552 user 0m0.152s 00:05:31.552 sys 0m0.027s 00:05:31.552 14:45:10 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.552 14:45:10 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.552 ************************************ 00:05:31.552 END TEST 
go_rpc 00:05:31.552 ************************************ 00:05:31.552 14:45:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:31.552 14:45:10 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:31.552 14:45:10 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:31.552 14:45:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.552 14:45:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.552 14:45:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.552 ************************************ 00:05:31.552 START TEST rpc_daemon_integrity 00:05:31.552 ************************************ 00:05:31.552 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:31.552 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:31.552 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.552 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.552 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.552 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:31.552 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:31.552 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:31.552 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.552 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.552 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:31.812 { 00:05:31.812 "aliases": [ 00:05:31.812 "aa7f7625-432a-4ae8-8c19-8feeb02b4954" 00:05:31.812 ], 00:05:31.812 "assigned_rate_limits": { 00:05:31.812 "r_mbytes_per_sec": 0, 00:05:31.812 "rw_ios_per_sec": 0, 00:05:31.812 "rw_mbytes_per_sec": 0, 00:05:31.812 "w_mbytes_per_sec": 0 00:05:31.812 }, 00:05:31.812 "block_size": 512, 00:05:31.812 "claimed": false, 00:05:31.812 "driver_specific": {}, 00:05:31.812 "memory_domains": [ 00:05:31.812 { 00:05:31.812 "dma_device_id": "system", 00:05:31.812 "dma_device_type": 1 00:05:31.812 }, 00:05:31.812 { 00:05:31.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.812 "dma_device_type": 2 00:05:31.812 } 00:05:31.812 ], 00:05:31.812 "name": "Malloc3", 00:05:31.812 "num_blocks": 16384, 00:05:31.812 "product_name": "Malloc disk", 00:05:31.812 "supported_io_types": { 00:05:31.812 "abort": true, 00:05:31.812 "compare": false, 00:05:31.812 "compare_and_write": false, 00:05:31.812 "copy": true, 00:05:31.812 "flush": true, 00:05:31.812 "get_zone_info": false, 00:05:31.812 "nvme_admin": false, 00:05:31.812 "nvme_io": false, 00:05:31.812 "nvme_io_md": false, 00:05:31.812 "nvme_iov_md": false, 00:05:31.812 "read": true, 00:05:31.812 "reset": true, 00:05:31.812 "seek_data": 
false, 00:05:31.812 "seek_hole": false, 00:05:31.812 "unmap": true, 00:05:31.812 "write": true, 00:05:31.812 "write_zeroes": true, 00:05:31.812 "zcopy": true, 00:05:31.812 "zone_append": false, 00:05:31.812 "zone_management": false 00:05:31.812 }, 00:05:31.812 "uuid": "aa7f7625-432a-4ae8-8c19-8feeb02b4954", 00:05:31.812 "zoned": false 00:05:31.812 } 00:05:31.812 ]' 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.812 [2024-07-12 14:45:10.273804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:31.812 [2024-07-12 14:45:10.273858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.812 [2024-07-12 14:45:10.273879] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1285290 00:05:31.812 [2024-07-12 14:45:10.273889] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.812 [2024-07-12 14:45:10.275313] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.812 [2024-07-12 14:45:10.275350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.812 Passthru0 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.812 { 00:05:31.812 "aliases": [ 00:05:31.812 "aa7f7625-432a-4ae8-8c19-8feeb02b4954" 00:05:31.812 ], 00:05:31.812 "assigned_rate_limits": { 00:05:31.812 "r_mbytes_per_sec": 0, 00:05:31.812 "rw_ios_per_sec": 0, 00:05:31.812 "rw_mbytes_per_sec": 0, 00:05:31.812 "w_mbytes_per_sec": 0 00:05:31.812 }, 00:05:31.812 "block_size": 512, 00:05:31.812 "claim_type": "exclusive_write", 00:05:31.812 "claimed": true, 00:05:31.812 "driver_specific": {}, 00:05:31.812 "memory_domains": [ 00:05:31.812 { 00:05:31.812 "dma_device_id": "system", 00:05:31.812 "dma_device_type": 1 00:05:31.812 }, 00:05:31.812 { 00:05:31.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.812 "dma_device_type": 2 00:05:31.812 } 00:05:31.812 ], 00:05:31.812 "name": "Malloc3", 00:05:31.812 "num_blocks": 16384, 00:05:31.812 "product_name": "Malloc disk", 00:05:31.812 "supported_io_types": { 00:05:31.812 "abort": true, 00:05:31.812 "compare": false, 00:05:31.812 "compare_and_write": false, 00:05:31.812 "copy": true, 00:05:31.812 "flush": true, 00:05:31.812 "get_zone_info": false, 00:05:31.812 "nvme_admin": false, 00:05:31.812 "nvme_io": false, 00:05:31.812 "nvme_io_md": false, 00:05:31.812 "nvme_iov_md": false, 00:05:31.812 "read": true, 00:05:31.812 "reset": true, 00:05:31.812 "seek_data": false, 00:05:31.812 "seek_hole": false, 00:05:31.812 "unmap": true, 00:05:31.812 "write": true, 00:05:31.812 "write_zeroes": 
true, 00:05:31.812 "zcopy": true, 00:05:31.812 "zone_append": false, 00:05:31.812 "zone_management": false 00:05:31.812 }, 00:05:31.812 "uuid": "aa7f7625-432a-4ae8-8c19-8feeb02b4954", 00:05:31.812 "zoned": false 00:05:31.812 }, 00:05:31.812 { 00:05:31.812 "aliases": [ 00:05:31.812 "48625ae2-2fc9-55e4-a646-0995b8cc3529" 00:05:31.812 ], 00:05:31.812 "assigned_rate_limits": { 00:05:31.812 "r_mbytes_per_sec": 0, 00:05:31.812 "rw_ios_per_sec": 0, 00:05:31.812 "rw_mbytes_per_sec": 0, 00:05:31.812 "w_mbytes_per_sec": 0 00:05:31.812 }, 00:05:31.812 "block_size": 512, 00:05:31.812 "claimed": false, 00:05:31.812 "driver_specific": { 00:05:31.812 "passthru": { 00:05:31.812 "base_bdev_name": "Malloc3", 00:05:31.812 "name": "Passthru0" 00:05:31.812 } 00:05:31.812 }, 00:05:31.812 "memory_domains": [ 00:05:31.812 { 00:05:31.812 "dma_device_id": "system", 00:05:31.812 "dma_device_type": 1 00:05:31.812 }, 00:05:31.812 { 00:05:31.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.812 "dma_device_type": 2 00:05:31.812 } 00:05:31.812 ], 00:05:31.812 "name": "Passthru0", 00:05:31.812 "num_blocks": 16384, 00:05:31.812 "product_name": "passthru", 00:05:31.812 "supported_io_types": { 00:05:31.812 "abort": true, 00:05:31.812 "compare": false, 00:05:31.812 "compare_and_write": false, 00:05:31.812 "copy": true, 00:05:31.812 "flush": true, 00:05:31.812 "get_zone_info": false, 00:05:31.812 "nvme_admin": false, 00:05:31.812 "nvme_io": false, 00:05:31.812 "nvme_io_md": false, 00:05:31.812 "nvme_iov_md": false, 00:05:31.812 "read": true, 00:05:31.812 "reset": true, 00:05:31.812 "seek_data": false, 00:05:31.812 "seek_hole": false, 00:05:31.812 "unmap": true, 00:05:31.812 "write": true, 00:05:31.812 "write_zeroes": true, 00:05:31.812 "zcopy": true, 00:05:31.812 "zone_append": false, 00:05:31.812 "zone_management": false 00:05:31.812 }, 00:05:31.812 "uuid": "48625ae2-2fc9-55e4-a646-0995b8cc3529", 00:05:31.812 "zoned": false 00:05:31.812 } 00:05:31.812 ]' 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.812 
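Note: both integrity suites above (rpc_integrity and rpc_daemon_integrity) exercise the same create, stack, verify, tear-down loop over the target's JSON-RPC socket; the rpc_cmd helper is assumed here to resolve to the standard scripts/rpc.py client. A condensed sketch of the daemon-integrity sequence run by hand against the same target:
  ./scripts/rpc.py bdev_malloc_create 8 512            # 8 MiB malloc bdev, 512 B blocks -> Malloc3
  ./scripts/rpc.py bdev_passthru_create -b Malloc3 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length          # expect 2 (Malloc3 + Passthru0)
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc3
  ./scripts/rpc.py bdev_get_bdevs | jq length          # expect 0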
00:05:31.812 real 0m0.305s 00:05:31.812 user 0m0.203s 00:05:31.812 sys 0m0.039s 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.812 14:45:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.812 ************************************ 00:05:31.812 END TEST rpc_daemon_integrity 00:05:31.812 ************************************ 00:05:32.071 14:45:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:32.071 14:45:10 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:32.071 14:45:10 rpc -- rpc/rpc.sh@84 -- # killprocess 60682 00:05:32.071 14:45:10 rpc -- common/autotest_common.sh@948 -- # '[' -z 60682 ']' 00:05:32.071 14:45:10 rpc -- common/autotest_common.sh@952 -- # kill -0 60682 00:05:32.071 14:45:10 rpc -- common/autotest_common.sh@953 -- # uname 00:05:32.071 14:45:10 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.071 14:45:10 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60682 00:05:32.071 14:45:10 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.071 14:45:10 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.071 killing process with pid 60682 00:05:32.071 14:45:10 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60682' 00:05:32.071 14:45:10 rpc -- common/autotest_common.sh@967 -- # kill 60682 00:05:32.071 14:45:10 rpc -- common/autotest_common.sh@972 -- # wait 60682 00:05:32.329 00:05:32.329 real 0m3.000s 00:05:32.329 user 0m4.207s 00:05:32.329 sys 0m0.603s 00:05:32.329 14:45:10 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.329 14:45:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.329 ************************************ 00:05:32.329 END TEST rpc 00:05:32.329 ************************************ 00:05:32.329 14:45:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:32.329 14:45:10 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:32.329 14:45:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.329 14:45:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.329 14:45:10 -- common/autotest_common.sh@10 -- # set +x 00:05:32.329 ************************************ 00:05:32.329 START TEST skip_rpc 00:05:32.329 ************************************ 00:05:32.329 14:45:10 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:32.329 * Looking for test storage... 
00:05:32.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:32.329 14:45:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:32.329 14:45:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:32.329 14:45:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:32.329 14:45:10 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.329 14:45:10 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.329 14:45:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.329 ************************************ 00:05:32.329 START TEST skip_rpc 00:05:32.329 ************************************ 00:05:32.329 14:45:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:32.329 14:45:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60942 00:05:32.329 14:45:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.329 14:45:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:32.329 14:45:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:32.329 [2024-07-12 14:45:10.955211] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:05:32.329 [2024-07-12 14:45:10.955300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60942 ] 00:05:32.587 [2024-07-12 14:45:11.096356] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.587 [2024-07-12 14:45:11.169103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.926 2024/07/12 14:45:15 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.926 14:45:15 
skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60942 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 60942 ']' 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 60942 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60942 00:05:37.926 killing process with pid 60942 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60942' 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 60942 00:05:37.926 14:45:15 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 60942 00:05:37.926 00:05:37.926 real 0m5.287s 00:05:37.926 user 0m5.002s 00:05:37.926 sys 0m0.175s 00:05:37.926 14:45:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.926 14:45:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.926 ************************************ 00:05:37.926 END TEST skip_rpc 00:05:37.926 ************************************ 00:05:37.926 14:45:16 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:37.926 14:45:16 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:37.926 14:45:16 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.926 14:45:16 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.926 14:45:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.926 ************************************ 00:05:37.926 START TEST skip_rpc_with_json 00:05:37.926 ************************************ 00:05:37.926 14:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:37.926 14:45:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:37.926 14:45:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61036 00:05:37.926 14:45:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.926 14:45:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.926 14:45:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61036 00:05:37.926 14:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 61036 ']' 00:05:37.926 14:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.926 14:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.926 14:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:37.926 14:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.926 14:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.926 [2024-07-12 14:45:16.298154] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:05:37.926 [2024-07-12 14:45:16.298272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61036 ] 00:05:37.926 [2024-07-12 14:45:16.440659] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.926 [2024-07-12 14:45:16.512919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.861 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.861 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:38.861 14:45:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:38.861 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.861 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.861 [2024-07-12 14:45:17.285641] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:38.861 2024/07/12 14:45:17 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:38.861 request: 00:05:38.861 { 00:05:38.861 "method": "nvmf_get_transports", 00:05:38.861 "params": { 00:05:38.861 "trtype": "tcp" 00:05:38.861 } 00:05:38.861 } 00:05:38.861 Got JSON-RPC error response 00:05:38.861 GoRPCClient: error on JSON-RPC call 00:05:38.861 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:38.861 14:45:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:38.861 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.861 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.861 [2024-07-12 14:45:17.297753] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.861 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.861 14:45:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:38.861 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.861 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.861 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.861 14:45:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:38.861 { 00:05:38.861 "subsystems": [ 00:05:38.861 { 00:05:38.861 "subsystem": "keyring", 00:05:38.861 "config": [] 00:05:38.861 }, 00:05:38.861 { 00:05:38.861 "subsystem": "iobuf", 00:05:38.861 "config": [ 00:05:38.861 { 00:05:38.861 "method": "iobuf_set_options", 00:05:38.861 "params": { 00:05:38.861 "large_bufsize": 135168, 00:05:38.861 "large_pool_count": 1024, 00:05:38.861 "small_bufsize": 8192, 00:05:38.861 "small_pool_count": 8192 00:05:38.861 } 00:05:38.861 } 
00:05:38.861 ] 00:05:38.861 }, 00:05:38.861 { 00:05:38.861 "subsystem": "sock", 00:05:38.861 "config": [ 00:05:38.861 { 00:05:38.861 "method": "sock_set_default_impl", 00:05:38.861 "params": { 00:05:38.861 "impl_name": "posix" 00:05:38.861 } 00:05:38.861 }, 00:05:38.861 { 00:05:38.861 "method": "sock_impl_set_options", 00:05:38.861 "params": { 00:05:38.861 "enable_ktls": false, 00:05:38.861 "enable_placement_id": 0, 00:05:38.861 "enable_quickack": false, 00:05:38.861 "enable_recv_pipe": true, 00:05:38.861 "enable_zerocopy_send_client": false, 00:05:38.861 "enable_zerocopy_send_server": true, 00:05:38.861 "impl_name": "ssl", 00:05:38.861 "recv_buf_size": 4096, 00:05:38.861 "send_buf_size": 4096, 00:05:38.861 "tls_version": 0, 00:05:38.862 "zerocopy_threshold": 0 00:05:38.862 } 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "method": "sock_impl_set_options", 00:05:38.862 "params": { 00:05:38.862 "enable_ktls": false, 00:05:38.862 "enable_placement_id": 0, 00:05:38.862 "enable_quickack": false, 00:05:38.862 "enable_recv_pipe": true, 00:05:38.862 "enable_zerocopy_send_client": false, 00:05:38.862 "enable_zerocopy_send_server": true, 00:05:38.862 "impl_name": "posix", 00:05:38.862 "recv_buf_size": 2097152, 00:05:38.862 "send_buf_size": 2097152, 00:05:38.862 "tls_version": 0, 00:05:38.862 "zerocopy_threshold": 0 00:05:38.862 } 00:05:38.862 } 00:05:38.862 ] 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "subsystem": "vmd", 00:05:38.862 "config": [] 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "subsystem": "accel", 00:05:38.862 "config": [ 00:05:38.862 { 00:05:38.862 "method": "accel_set_options", 00:05:38.862 "params": { 00:05:38.862 "buf_count": 2048, 00:05:38.862 "large_cache_size": 16, 00:05:38.862 "sequence_count": 2048, 00:05:38.862 "small_cache_size": 128, 00:05:38.862 "task_count": 2048 00:05:38.862 } 00:05:38.862 } 00:05:38.862 ] 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "subsystem": "bdev", 00:05:38.862 "config": [ 00:05:38.862 { 00:05:38.862 "method": "bdev_set_options", 00:05:38.862 "params": { 00:05:38.862 "bdev_auto_examine": true, 00:05:38.862 "bdev_io_cache_size": 256, 00:05:38.862 "bdev_io_pool_size": 65535, 00:05:38.862 "iobuf_large_cache_size": 16, 00:05:38.862 "iobuf_small_cache_size": 128 00:05:38.862 } 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "method": "bdev_raid_set_options", 00:05:38.862 "params": { 00:05:38.862 "process_window_size_kb": 1024 00:05:38.862 } 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "method": "bdev_iscsi_set_options", 00:05:38.862 "params": { 00:05:38.862 "timeout_sec": 30 00:05:38.862 } 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "method": "bdev_nvme_set_options", 00:05:38.862 "params": { 00:05:38.862 "action_on_timeout": "none", 00:05:38.862 "allow_accel_sequence": false, 00:05:38.862 "arbitration_burst": 0, 00:05:38.862 "bdev_retry_count": 3, 00:05:38.862 "ctrlr_loss_timeout_sec": 0, 00:05:38.862 "delay_cmd_submit": true, 00:05:38.862 "dhchap_dhgroups": [ 00:05:38.862 "null", 00:05:38.862 "ffdhe2048", 00:05:38.862 "ffdhe3072", 00:05:38.862 "ffdhe4096", 00:05:38.862 "ffdhe6144", 00:05:38.862 "ffdhe8192" 00:05:38.862 ], 00:05:38.862 "dhchap_digests": [ 00:05:38.862 "sha256", 00:05:38.862 "sha384", 00:05:38.862 "sha512" 00:05:38.862 ], 00:05:38.862 "disable_auto_failback": false, 00:05:38.862 "fast_io_fail_timeout_sec": 0, 00:05:38.862 "generate_uuids": false, 00:05:38.862 "high_priority_weight": 0, 00:05:38.862 "io_path_stat": false, 00:05:38.862 "io_queue_requests": 0, 00:05:38.862 "keep_alive_timeout_ms": 10000, 00:05:38.862 "low_priority_weight": 0, 
00:05:38.862 "medium_priority_weight": 0, 00:05:38.862 "nvme_adminq_poll_period_us": 10000, 00:05:38.862 "nvme_error_stat": false, 00:05:38.862 "nvme_ioq_poll_period_us": 0, 00:05:38.862 "rdma_cm_event_timeout_ms": 0, 00:05:38.862 "rdma_max_cq_size": 0, 00:05:38.862 "rdma_srq_size": 0, 00:05:38.862 "reconnect_delay_sec": 0, 00:05:38.862 "timeout_admin_us": 0, 00:05:38.862 "timeout_us": 0, 00:05:38.862 "transport_ack_timeout": 0, 00:05:38.862 "transport_retry_count": 4, 00:05:38.862 "transport_tos": 0 00:05:38.862 } 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "method": "bdev_nvme_set_hotplug", 00:05:38.862 "params": { 00:05:38.862 "enable": false, 00:05:38.862 "period_us": 100000 00:05:38.862 } 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "method": "bdev_wait_for_examine" 00:05:38.862 } 00:05:38.862 ] 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "subsystem": "scsi", 00:05:38.862 "config": null 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "subsystem": "scheduler", 00:05:38.862 "config": [ 00:05:38.862 { 00:05:38.862 "method": "framework_set_scheduler", 00:05:38.862 "params": { 00:05:38.862 "name": "static" 00:05:38.862 } 00:05:38.862 } 00:05:38.862 ] 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "subsystem": "vhost_scsi", 00:05:38.862 "config": [] 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "subsystem": "vhost_blk", 00:05:38.862 "config": [] 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "subsystem": "ublk", 00:05:38.862 "config": [] 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "subsystem": "nbd", 00:05:38.862 "config": [] 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "subsystem": "nvmf", 00:05:38.862 "config": [ 00:05:38.862 { 00:05:38.862 "method": "nvmf_set_config", 00:05:38.862 "params": { 00:05:38.862 "admin_cmd_passthru": { 00:05:38.862 "identify_ctrlr": false 00:05:38.862 }, 00:05:38.862 "discovery_filter": "match_any" 00:05:38.862 } 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "method": "nvmf_set_max_subsystems", 00:05:38.862 "params": { 00:05:38.862 "max_subsystems": 1024 00:05:38.862 } 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "method": "nvmf_set_crdt", 00:05:38.862 "params": { 00:05:38.862 "crdt1": 0, 00:05:38.862 "crdt2": 0, 00:05:38.862 "crdt3": 0 00:05:38.862 } 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "method": "nvmf_create_transport", 00:05:38.862 "params": { 00:05:38.862 "abort_timeout_sec": 1, 00:05:38.862 "ack_timeout": 0, 00:05:38.862 "buf_cache_size": 4294967295, 00:05:38.862 "c2h_success": true, 00:05:38.862 "data_wr_pool_size": 0, 00:05:38.862 "dif_insert_or_strip": false, 00:05:38.862 "in_capsule_data_size": 4096, 00:05:38.862 "io_unit_size": 131072, 00:05:38.862 "max_aq_depth": 128, 00:05:38.862 "max_io_qpairs_per_ctrlr": 127, 00:05:38.862 "max_io_size": 131072, 00:05:38.862 "max_queue_depth": 128, 00:05:38.862 "num_shared_buffers": 511, 00:05:38.862 "sock_priority": 0, 00:05:38.862 "trtype": "TCP", 00:05:38.862 "zcopy": false 00:05:38.862 } 00:05:38.862 } 00:05:38.862 ] 00:05:38.862 }, 00:05:38.862 { 00:05:38.862 "subsystem": "iscsi", 00:05:38.862 "config": [ 00:05:38.862 { 00:05:38.862 "method": "iscsi_set_options", 00:05:38.862 "params": { 00:05:38.863 "allow_duplicated_isid": false, 00:05:38.863 "chap_group": 0, 00:05:38.863 "data_out_pool_size": 2048, 00:05:38.863 "default_time2retain": 20, 00:05:38.863 "default_time2wait": 2, 00:05:38.863 "disable_chap": false, 00:05:38.863 "error_recovery_level": 0, 00:05:38.863 "first_burst_length": 8192, 00:05:38.863 "immediate_data": true, 00:05:38.863 "immediate_data_pool_size": 16384, 00:05:38.863 "max_connections_per_session": 
2, 00:05:38.863 "max_large_datain_per_connection": 64, 00:05:38.863 "max_queue_depth": 64, 00:05:38.863 "max_r2t_per_connection": 4, 00:05:38.863 "max_sessions": 128, 00:05:38.863 "mutual_chap": false, 00:05:38.863 "node_base": "iqn.2016-06.io.spdk", 00:05:38.863 "nop_in_interval": 30, 00:05:38.863 "nop_timeout": 60, 00:05:38.863 "pdu_pool_size": 36864, 00:05:38.863 "require_chap": false 00:05:38.863 } 00:05:38.863 } 00:05:38.863 ] 00:05:38.863 } 00:05:38.863 ] 00:05:38.863 } 00:05:38.863 14:45:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:38.863 14:45:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61036 00:05:38.863 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61036 ']' 00:05:38.863 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61036 00:05:38.863 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:38.863 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.863 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61036 00:05:38.863 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.863 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.863 killing process with pid 61036 00:05:38.863 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61036' 00:05:38.863 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61036 00:05:38.863 14:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61036 00:05:39.122 14:45:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61070 00:05:39.122 14:45:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:39.122 14:45:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:44.383 14:45:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61070 00:05:44.383 14:45:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61070 ']' 00:05:44.383 14:45:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61070 00:05:44.383 14:45:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:44.383 14:45:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.383 14:45:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61070 00:05:44.383 14:45:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.383 14:45:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.383 14:45:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61070' 00:05:44.383 killing process with pid 61070 00:05:44.383 14:45:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61070 00:05:44.383 14:45:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61070 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:44.649 00:05:44.649 real 0m6.841s 00:05:44.649 user 0m6.773s 00:05:44.649 sys 0m0.457s 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.649 ************************************ 00:05:44.649 END TEST skip_rpc_with_json 00:05:44.649 ************************************ 00:05:44.649 14:45:23 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:44.649 14:45:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:44.649 14:45:23 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.649 14:45:23 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.649 14:45:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.649 ************************************ 00:05:44.649 START TEST skip_rpc_with_delay 00:05:44.649 ************************************ 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.649 [2024-07-12 14:45:23.190400] app.c: 836:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:44.649 [2024-07-12 14:45:23.190555] app.c: 715:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.649 00:05:44.649 real 0m0.090s 00:05:44.649 user 0m0.053s 00:05:44.649 sys 0m0.036s 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.649 14:45:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:44.649 ************************************ 00:05:44.649 END TEST skip_rpc_with_delay 00:05:44.649 ************************************ 00:05:44.649 14:45:23 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:44.649 14:45:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:44.649 14:45:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:44.649 14:45:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:44.649 14:45:23 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.649 14:45:23 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.649 14:45:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.649 ************************************ 00:05:44.649 START TEST exit_on_failed_rpc_init 00:05:44.649 ************************************ 00:05:44.649 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:44.649 14:45:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61180 00:05:44.649 14:45:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61180 00:05:44.649 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 61180 ']' 00:05:44.649 14:45:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.649 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.649 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.649 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.649 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.649 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.922 [2024-07-12 14:45:23.332850] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:05:44.922 [2024-07-12 14:45:23.332967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61180 ] 00:05:44.922 [2024-07-12 14:45:23.470984] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.922 [2024-07-12 14:45:23.530835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.180 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.180 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:45.180 14:45:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.180 14:45:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:45.180 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:45.180 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:45.180 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.180 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.180 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.180 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.180 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.180 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.180 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.180 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:45.180 14:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:45.180 [2024-07-12 14:45:23.762085] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:05:45.180 [2024-07-12 14:45:23.762186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61196 ] 00:05:45.438 [2024-07-12 14:45:23.900417] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.438 [2024-07-12 14:45:23.971250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.438 [2024-07-12 14:45:23.971347] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:45.438 [2024-07-12 14:45:23.971363] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:45.438 [2024-07-12 14:45:23.971374] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61180 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 61180 ']' 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 61180 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61180 00:05:45.438 killing process with pid 61180 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61180' 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 61180 00:05:45.438 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 61180 00:05:46.002 ************************************ 00:05:46.002 END TEST exit_on_failed_rpc_init 00:05:46.002 ************************************ 00:05:46.003 00:05:46.003 real 0m1.089s 00:05:46.003 user 0m1.285s 00:05:46.003 sys 0m0.272s 00:05:46.003 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.003 14:45:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.003 14:45:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.003 14:45:24 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:46.003 00:05:46.003 real 0m13.581s 00:05:46.003 user 0m13.209s 00:05:46.003 sys 0m1.104s 00:05:46.003 14:45:24 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.003 14:45:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.003 ************************************ 00:05:46.003 END TEST skip_rpc 00:05:46.003 ************************************ 00:05:46.003 14:45:24 -- common/autotest_common.sh@1142 -- # return 0 00:05:46.003 14:45:24 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:46.003 14:45:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.003 
14:45:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.003 14:45:24 -- common/autotest_common.sh@10 -- # set +x 00:05:46.003 ************************************ 00:05:46.003 START TEST rpc_client 00:05:46.003 ************************************ 00:05:46.003 14:45:24 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:46.003 * Looking for test storage... 00:05:46.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:46.003 14:45:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:46.003 OK 00:05:46.003 14:45:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:46.003 ************************************ 00:05:46.003 END TEST rpc_client 00:05:46.003 ************************************ 00:05:46.003 00:05:46.003 real 0m0.096s 00:05:46.003 user 0m0.039s 00:05:46.003 sys 0m0.062s 00:05:46.003 14:45:24 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.003 14:45:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:46.003 14:45:24 -- common/autotest_common.sh@1142 -- # return 0 00:05:46.003 14:45:24 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:46.003 14:45:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.003 14:45:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.003 14:45:24 -- common/autotest_common.sh@10 -- # set +x 00:05:46.003 ************************************ 00:05:46.003 START TEST json_config 00:05:46.003 ************************************ 00:05:46.003 14:45:24 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.003 14:45:24 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:46.003 14:45:24 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.003 14:45:24 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.003 14:45:24 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.003 14:45:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.003 14:45:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.003 14:45:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.003 14:45:24 json_config -- paths/export.sh@5 -- # export PATH 00:05:46.003 14:45:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@47 -- # : 0 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:46.003 14:45:24 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:46.003 INFO: JSON configuration test init 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:46.003 14:45:24 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:46.003 14:45:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.003 14:45:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.262 14:45:24 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:46.262 14:45:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.262 14:45:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.262 14:45:24 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:46.262 14:45:24 json_config -- json_config/common.sh@9 -- # local app=target 00:05:46.262 14:45:24 json_config -- json_config/common.sh@10 -- # shift 00:05:46.262 Waiting for target to run... 00:05:46.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.262 14:45:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.262 14:45:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.262 14:45:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.262 14:45:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.262 14:45:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.262 14:45:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61314 00:05:46.262 14:45:24 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:46.262 14:45:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
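The xtrace above shows json_config_test_start_app bringing the target up (spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc) and then waiting on its RPC socket. A minimal launch-and-wait stand-in, assuming the binary and socket paths seen in this log; the polling loop only approximates the waitforlisten helper in autotest_common.sh, it is not that helper's actual code:

    #!/usr/bin/env bash
    # Simplified launch-and-wait sketch, not the real common.sh/waitforlisten logic.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK_TGT" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
    tgt_pid=$!

    # Poll the RPC socket; rpc_get_methods answers even while the app waits for RPC.
    for _ in $(seq 1 100); do
        "$RPC_PY" -s "$SOCK" -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.1
    done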
00:05:46.262 14:45:24 json_config -- json_config/common.sh@25 -- # waitforlisten 61314 /var/tmp/spdk_tgt.sock 00:05:46.262 14:45:24 json_config -- common/autotest_common.sh@829 -- # '[' -z 61314 ']' 00:05:46.262 14:45:24 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.262 14:45:24 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.262 14:45:24 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.262 14:45:24 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.262 14:45:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.262 [2024-07-12 14:45:24.730229] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:05:46.262 [2024-07-12 14:45:24.730578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61314 ] 00:05:46.520 [2024-07-12 14:45:25.067034] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.520 [2024-07-12 14:45:25.133229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.452 14:45:25 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.452 14:45:25 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:47.452 14:45:25 json_config -- json_config/common.sh@26 -- # echo '' 00:05:47.452 00:05:47.452 14:45:25 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:47.452 14:45:25 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:47.452 14:45:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.452 14:45:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.452 14:45:25 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:47.452 14:45:25 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:47.452 14:45:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.452 14:45:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.452 14:45:25 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:47.452 14:45:25 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:47.452 14:45:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:47.709 14:45:26 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:47.709 14:45:26 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:47.709 14:45:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.709 14:45:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.709 14:45:26 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:47.709 14:45:26 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:47.709 14:45:26 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:47.709 14:45:26 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:47.709 14:45:26 
json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:47.709 14:45:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:47.966 14:45:26 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:47.966 14:45:26 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:47.966 14:45:26 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:47.966 14:45:26 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:47.966 14:45:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.966 14:45:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.966 14:45:26 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:47.966 14:45:26 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:47.966 14:45:26 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:47.966 14:45:26 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:47.966 14:45:26 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:47.966 14:45:26 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:47.966 14:45:26 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:47.966 14:45:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.966 14:45:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.966 14:45:26 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:47.966 14:45:26 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:47.966 14:45:26 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:47.966 14:45:26 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:47.966 14:45:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:48.531 MallocForNvmf0 00:05:48.531 14:45:26 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:48.531 14:45:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:48.531 MallocForNvmf1 00:05:48.789 14:45:27 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:48.789 14:45:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:48.789 [2024-07-12 14:45:27.419593] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.087 14:45:27 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:49.087 14:45:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:49.087 14:45:27 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:49.088 14:45:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:49.372 14:45:27 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:49.372 14:45:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:49.630 14:45:28 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:49.630 14:45:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:49.888 [2024-07-12 14:45:28.412555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:49.888 14:45:28 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:49.888 14:45:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:49.888 14:45:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.888 14:45:28 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:49.888 14:45:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:49.888 14:45:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.888 14:45:28 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:49.888 14:45:28 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:49.888 14:45:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:50.146 MallocBdevForConfigChangeCheck 00:05:50.146 14:45:28 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:50.146 14:45:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.146 14:45:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.146 14:45:28 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:50.146 14:45:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.712 INFO: shutting down applications... 00:05:50.712 14:45:29 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
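Everything the target holds at this point was configured over JSON-RPC; condensing the tgt_rpc invocations traced above (same socket path, bdev names, NQN and listener address as in this log) gives roughly:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'

    # Backing bdevs, TCP transport, subsystem, namespaces and listener, in that order.
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    $RPC save_config > spdk_tgt_config.json   # output file name here is illustrative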
00:05:50.712 14:45:29 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:50.712 14:45:29 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:50.712 14:45:29 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:50.712 14:45:29 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:50.971 Calling clear_iscsi_subsystem 00:05:50.971 Calling clear_nvmf_subsystem 00:05:50.971 Calling clear_nbd_subsystem 00:05:50.971 Calling clear_ublk_subsystem 00:05:50.971 Calling clear_vhost_blk_subsystem 00:05:50.971 Calling clear_vhost_scsi_subsystem 00:05:50.971 Calling clear_bdev_subsystem 00:05:50.971 14:45:29 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:50.971 14:45:29 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:50.971 14:45:29 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:50.971 14:45:29 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:50.971 14:45:29 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.971 14:45:29 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:51.536 14:45:29 json_config -- json_config/json_config.sh@345 -- # break 00:05:51.536 14:45:29 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:51.536 14:45:29 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:51.536 14:45:29 json_config -- json_config/common.sh@31 -- # local app=target 00:05:51.536 14:45:29 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:51.536 14:45:29 json_config -- json_config/common.sh@35 -- # [[ -n 61314 ]] 00:05:51.536 14:45:29 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61314 00:05:51.536 14:45:29 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:51.536 14:45:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.536 14:45:29 json_config -- json_config/common.sh@41 -- # kill -0 61314 00:05:51.536 14:45:29 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.793 14:45:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.793 14:45:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.793 14:45:30 json_config -- json_config/common.sh@41 -- # kill -0 61314 00:05:51.793 14:45:30 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:51.793 14:45:30 json_config -- json_config/common.sh@43 -- # break 00:05:51.793 14:45:30 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:51.793 SPDK target shutdown done 00:05:51.793 14:45:30 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:51.793 INFO: relaunching applications... 00:05:51.793 14:45:30 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
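The shutdown just traced (kill -SIGINT followed by a bounded kill -0 poll in json_config/common.sh) follows a common pattern; a sketch with the same 30 x 0.5 s budget, simplified from the helper:

    # Graceful stop: SIGINT first, then poll liveness for up to ~15 seconds.
    kill -SIGINT "$tgt_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$tgt_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done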
00:05:51.793 14:45:30 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:51.793 14:45:30 json_config -- json_config/common.sh@9 -- # local app=target 00:05:51.793 14:45:30 json_config -- json_config/common.sh@10 -- # shift 00:05:51.793 14:45:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:51.793 14:45:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:51.793 14:45:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:51.793 14:45:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.793 14:45:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.793 14:45:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61589 00:05:51.793 14:45:30 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:51.793 Waiting for target to run... 00:05:51.793 14:45:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:51.793 14:45:30 json_config -- json_config/common.sh@25 -- # waitforlisten 61589 /var/tmp/spdk_tgt.sock 00:05:51.793 14:45:30 json_config -- common/autotest_common.sh@829 -- # '[' -z 61589 ']' 00:05:51.793 14:45:30 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.793 14:45:30 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.793 14:45:30 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.793 14:45:30 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.793 14:45:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.051 [2024-07-12 14:45:30.491676] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:05:52.051 [2024-07-12 14:45:30.491771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61589 ] 00:05:52.308 [2024-07-12 14:45:30.786200] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.308 [2024-07-12 14:45:30.840752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.567 [2024-07-12 14:45:31.158127] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.567 [2024-07-12 14:45:31.190189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:52.825 14:45:31 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.825 14:45:31 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:52.825 00:05:52.825 14:45:31 json_config -- json_config/common.sh@26 -- # echo '' 00:05:52.825 14:45:31 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:52.825 INFO: Checking if target configuration is the same... 00:05:52.825 14:45:31 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
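The "same configuration" check that follows is performed by json_diff.sh: the live save_config output and the saved spdk_tgt_config.json are both normalized with config_filter.py -method sort and then compared with diff. Stripped of the temp-file handling visible in the trace, the check amounts to something like:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    CFG=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

    # Exit status 0 => the relaunched target still matches the saved configuration.
    diff -u <($RPC save_config | $FILTER -method sort) <($FILTER -method sort < "$CFG")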
00:05:52.825 14:45:31 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.825 14:45:31 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:52.825 14:45:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.084 + '[' 2 -ne 2 ']' 00:05:53.084 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:53.084 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:53.084 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:53.084 +++ basename /dev/fd/62 00:05:53.084 ++ mktemp /tmp/62.XXX 00:05:53.084 + tmp_file_1=/tmp/62.2Dq 00:05:53.084 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:53.084 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:53.084 + tmp_file_2=/tmp/spdk_tgt_config.json.fLL 00:05:53.084 + ret=0 00:05:53.084 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:53.342 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:53.342 + diff -u /tmp/62.2Dq /tmp/spdk_tgt_config.json.fLL 00:05:53.342 INFO: JSON config files are the same 00:05:53.342 + echo 'INFO: JSON config files are the same' 00:05:53.342 + rm /tmp/62.2Dq /tmp/spdk_tgt_config.json.fLL 00:05:53.342 + exit 0 00:05:53.342 14:45:31 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:53.342 INFO: changing configuration and checking if this can be detected... 00:05:53.342 14:45:31 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:53.342 14:45:31 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:53.342 14:45:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:53.908 14:45:32 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:53.908 14:45:32 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:53.908 14:45:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.908 + '[' 2 -ne 2 ']' 00:05:53.908 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:53.908 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:53.908 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:53.908 +++ basename /dev/fd/62 00:05:53.908 ++ mktemp /tmp/62.XXX 00:05:53.908 + tmp_file_1=/tmp/62.gOK 00:05:53.908 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:53.908 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:53.908 + tmp_file_2=/tmp/spdk_tgt_config.json.f6y 00:05:53.908 + ret=0 00:05:53.908 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:54.166 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:54.166 + diff -u /tmp/62.gOK /tmp/spdk_tgt_config.json.f6y 00:05:54.166 + ret=1 00:05:54.166 + echo '=== Start of file: /tmp/62.gOK ===' 00:05:54.166 + cat /tmp/62.gOK 00:05:54.166 + echo '=== End of file: /tmp/62.gOK ===' 00:05:54.166 + echo '' 00:05:54.166 + echo '=== Start of file: /tmp/spdk_tgt_config.json.f6y ===' 00:05:54.166 + cat /tmp/spdk_tgt_config.json.f6y 00:05:54.166 + echo '=== End of file: /tmp/spdk_tgt_config.json.f6y ===' 00:05:54.166 + echo '' 00:05:54.166 + rm /tmp/62.gOK /tmp/spdk_tgt_config.json.f6y 00:05:54.166 + exit 1 00:05:54.166 INFO: configuration change detected. 00:05:54.166 14:45:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:54.166 14:45:32 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:54.166 14:45:32 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:54.166 14:45:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:54.166 14:45:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.166 14:45:32 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:54.166 14:45:32 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:54.166 14:45:32 json_config -- json_config/json_config.sh@317 -- # [[ -n 61589 ]] 00:05:54.166 14:45:32 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:54.166 14:45:32 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:54.166 14:45:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:54.167 14:45:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.167 14:45:32 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:54.167 14:45:32 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:54.167 14:45:32 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:54.167 14:45:32 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:54.167 14:45:32 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:54.167 14:45:32 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:54.167 14:45:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:54.167 14:45:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.436 14:45:32 json_config -- json_config/json_config.sh@323 -- # killprocess 61589 00:05:54.436 14:45:32 json_config -- common/autotest_common.sh@948 -- # '[' -z 61589 ']' 00:05:54.436 14:45:32 json_config -- common/autotest_common.sh@952 -- # kill -0 61589 00:05:54.436 14:45:32 json_config -- common/autotest_common.sh@953 -- # uname 00:05:54.436 14:45:32 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.436 14:45:32 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61589 00:05:54.436 
14:45:32 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.436 14:45:32 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.436 killing process with pid 61589 00:05:54.436 14:45:32 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61589' 00:05:54.436 14:45:32 json_config -- common/autotest_common.sh@967 -- # kill 61589 00:05:54.436 14:45:32 json_config -- common/autotest_common.sh@972 -- # wait 61589 00:05:54.436 14:45:33 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:54.437 14:45:33 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:54.437 14:45:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:54.437 14:45:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.437 14:45:33 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:54.437 INFO: Success 00:05:54.437 14:45:33 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:54.437 00:05:54.437 real 0m8.501s 00:05:54.437 user 0m12.593s 00:05:54.437 sys 0m1.492s 00:05:54.437 ************************************ 00:05:54.437 END TEST json_config 00:05:54.437 ************************************ 00:05:54.437 14:45:33 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.437 14:45:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.746 14:45:33 -- common/autotest_common.sh@1142 -- # return 0 00:05:54.746 14:45:33 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:54.746 14:45:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.746 14:45:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.746 14:45:33 -- common/autotest_common.sh@10 -- # set +x 00:05:54.746 ************************************ 00:05:54.746 START TEST json_config_extra_key 00:05:54.746 ************************************ 00:05:54.746 14:45:33 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:54.746 14:45:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.746 14:45:33 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:54.746 14:45:33 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.746 14:45:33 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.746 14:45:33 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.746 14:45:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.746 14:45:33 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.746 14:45:33 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.746 14:45:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:54.747 14:45:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.747 14:45:33 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:54.747 14:45:33 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:54.747 14:45:33 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:54.747 14:45:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.747 14:45:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.747 14:45:33 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.747 14:45:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:54.747 14:45:33 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:54.747 14:45:33 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:54.747 14:45:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:54.747 14:45:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:54.747 14:45:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:54.747 14:45:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:54.747 14:45:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:54.747 14:45:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:54.747 14:45:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:54.747 14:45:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:54.747 14:45:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:54.747 14:45:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:54.747 14:45:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:54.747 INFO: launching applications... 00:05:54.747 Waiting for target to run... 00:05:54.747 14:45:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:54.747 14:45:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:54.747 14:45:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:54.747 14:45:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:54.747 14:45:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:54.747 14:45:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:54.747 14:45:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.747 14:45:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.747 14:45:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61759 00:05:54.747 14:45:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
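For readers following the trace: json_config/common.sh keeps one entry per application ("target" here) in a set of bash associative arrays, which is what the declare -A lines above are building. A minimal sketch of that bookkeeping, using only the values printed in this trace (it is not an authoritative copy of common.sh); the launch itself is sketched after the next trace block.

#!/usr/bin/env bash
# Sketch of the per-app bookkeeping that json_config/common.sh sets up (values copied from the trace above).
rootdir=/home/vagrant/spdk_repo/spdk

declare -A app_pid=([target]='')                            # filled in once spdk_tgt has started
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')   # one RPC socket per app
declare -A app_params=([target]='-m 0x1 -s 1024')           # core mask and hugepage memory size
declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")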
00:05:54.747 14:45:33 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:54.747 14:45:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61759 /var/tmp/spdk_tgt.sock 00:05:54.747 14:45:33 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 61759 ']' 00:05:54.747 14:45:33 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:54.747 14:45:33 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.747 14:45:33 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:54.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:54.747 14:45:33 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.747 14:45:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:54.747 [2024-07-12 14:45:33.246563] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:05:54.747 [2024-07-12 14:45:33.246863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61759 ] 00:05:55.005 [2024-07-12 14:45:33.550107] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.005 [2024-07-12 14:45:33.603903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.939 14:45:34 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.939 14:45:34 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:55.939 14:45:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:55.939 00:05:55.939 14:45:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:55.939 INFO: shutting down applications... 
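The lifecycle traced above for pid 61759 is: start spdk_tgt in the background with the extra-key JSON config, wait for its RPC socket, then send SIGINT and poll with kill -0 until the process exits. A condensed sketch of that flow with the same arguments the trace shows; the real json_config/common.sh adds error traps and the waitforlisten helper, which are omitted here.

#!/usr/bin/env bash
# Condensed sketch of the start/stop cycle traced above for pid 61759.
rootdir=/home/vagrant/spdk_repo/spdk

"$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json "$rootdir/test/json_config/extra_key.json" &
tgt_pid=$!
# (the real test waits for the UNIX socket with waitforlisten before going further)

# ... the test would now exercise the target over /var/tmp/spdk_tgt.sock ...

# Teardown as traced: SIGINT, then poll up to 30 times, 0.5 s apart, until the PID is gone.
kill -SIGINT "$tgt_pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$tgt_pid" 2> /dev/null || break
    sleep 0.5
done
echo 'SPDK target shutdown done'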
00:05:55.939 14:45:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:55.939 14:45:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:55.939 14:45:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:55.939 14:45:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61759 ]] 00:05:55.939 14:45:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61759 00:05:55.939 14:45:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:55.939 14:45:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:55.939 14:45:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61759 00:05:55.939 14:45:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:56.197 14:45:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:56.197 14:45:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:56.197 14:45:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61759 00:05:56.197 14:45:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:56.197 14:45:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:56.197 14:45:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:56.197 SPDK target shutdown done 00:05:56.197 Success 00:05:56.197 14:45:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:56.197 14:45:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:56.197 ************************************ 00:05:56.197 END TEST json_config_extra_key 00:05:56.197 ************************************ 00:05:56.197 00:05:56.197 real 0m1.683s 00:05:56.197 user 0m1.629s 00:05:56.197 sys 0m0.289s 00:05:56.197 14:45:34 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.197 14:45:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:56.197 14:45:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.197 14:45:34 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:56.197 14:45:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.197 14:45:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.197 14:45:34 -- common/autotest_common.sh@10 -- # set +x 00:05:56.455 ************************************ 00:05:56.455 START TEST alias_rpc 00:05:56.455 ************************************ 00:05:56.455 14:45:34 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:56.455 * Looking for test storage... 00:05:56.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:56.456 14:45:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:56.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
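The killprocess steps that recur in this trace (pid 61589 above, 61841 and 61914 below) reduce to: check that a PID was passed, confirm the process still exists, look up its command name, make sure it is not sudo, then kill and wait. A condensed reconstruction from the traced commands only; the real common/autotest_common.sh helper has additional platform and sudo-cleanup branches that are not shown here.

#!/usr/bin/env bash
# Reconstruction of the killprocess sequence as it appears in the xtrace output (sketch only).
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1                       # the "'[' -z ... ']'" guard in the trace
    kill -0 "$pid" 2> /dev/null || return 0         # nothing to do if the process is already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # resolves to reactor_0 in the trace
    fi
    # The traced helper checks for 'sudo' here and would target the child instead; omitted.
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}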
00:05:56.456 14:45:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61841 00:05:56.456 14:45:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.456 14:45:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61841 00:05:56.456 14:45:34 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 61841 ']' 00:05:56.456 14:45:34 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.456 14:45:34 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.456 14:45:34 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.456 14:45:34 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.456 14:45:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.456 [2024-07-12 14:45:35.002071] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:05:56.456 [2024-07-12 14:45:35.002389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61841 ] 00:05:56.714 [2024-07-12 14:45:35.142232] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.714 [2024-07-12 14:45:35.201869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.714 14:45:35 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.714 14:45:35 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:56.714 14:45:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:57.280 14:45:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61841 00:05:57.280 14:45:35 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 61841 ']' 00:05:57.280 14:45:35 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 61841 00:05:57.280 14:45:35 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:57.280 14:45:35 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.280 14:45:35 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61841 00:05:57.280 killing process with pid 61841 00:05:57.280 14:45:35 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.280 14:45:35 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.280 14:45:35 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61841' 00:05:57.280 14:45:35 alias_rpc -- common/autotest_common.sh@967 -- # kill 61841 00:05:57.280 14:45:35 alias_rpc -- common/autotest_common.sh@972 -- # wait 61841 00:05:57.539 ************************************ 00:05:57.539 END TEST alias_rpc 00:05:57.539 ************************************ 00:05:57.539 00:05:57.539 real 0m1.084s 00:05:57.539 user 0m1.288s 00:05:57.539 sys 0m0.290s 00:05:57.539 14:45:35 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.539 14:45:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.539 14:45:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.539 14:45:35 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:05:57.539 14:45:35 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.539 14:45:35 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.539 14:45:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.539 14:45:35 -- common/autotest_common.sh@10 -- # set +x 00:05:57.539 ************************************ 00:05:57.539 START TEST dpdk_mem_utility 00:05:57.539 ************************************ 00:05:57.539 14:45:35 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.539 * Looking for test storage... 00:05:57.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:57.539 14:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:57.539 14:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61914 00:05:57.539 14:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.539 14:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61914 00:05:57.539 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 61914 ']' 00:05:57.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.539 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.539 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.539 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.539 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.539 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:57.539 [2024-07-12 14:45:36.130171] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:05:57.539 [2024-07-12 14:45:36.131315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61914 ] 00:05:57.797 [2024-07-12 14:45:36.269161] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.797 [2024-07-12 14:45:36.341787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.056 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.056 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:58.056 14:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:58.056 14:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:58.056 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.056 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.056 { 00:05:58.056 "filename": "/tmp/spdk_mem_dump.txt" 00:05:58.056 } 00:05:58.056 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.056 14:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:58.056 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:58.056 1 heaps totaling size 814.000000 MiB 00:05:58.056 size: 814.000000 MiB heap id: 0 00:05:58.056 end heaps---------- 00:05:58.056 8 mempools totaling size 598.116089 MiB 00:05:58.056 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:58.056 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:58.056 size: 84.521057 MiB name: bdev_io_61914 00:05:58.056 size: 51.011292 MiB name: evtpool_61914 00:05:58.056 size: 50.003479 MiB name: msgpool_61914 00:05:58.056 size: 21.763794 MiB name: PDU_Pool 00:05:58.056 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:58.056 size: 0.026123 MiB name: Session_Pool 00:05:58.056 end mempools------- 00:05:58.056 6 memzones totaling size 4.142822 MiB 00:05:58.056 size: 1.000366 MiB name: RG_ring_0_61914 00:05:58.056 size: 1.000366 MiB name: RG_ring_1_61914 00:05:58.056 size: 1.000366 MiB name: RG_ring_4_61914 00:05:58.056 size: 1.000366 MiB name: RG_ring_5_61914 00:05:58.056 size: 0.125366 MiB name: RG_ring_2_61914 00:05:58.056 size: 0.015991 MiB name: RG_ring_3_61914 00:05:58.056 end memzones------- 00:05:58.056 14:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:58.056 heap id: 0 total size: 814.000000 MiB number of busy elements: 236 number of free elements: 15 00:05:58.056 list of free elements. 
size: 12.483643 MiB 00:05:58.056 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:58.056 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:58.056 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:58.056 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:58.057 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:58.057 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:58.057 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:58.057 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:58.057 element at address: 0x200000200000 with size: 0.836853 MiB 00:05:58.057 element at address: 0x20001aa00000 with size: 0.571167 MiB 00:05:58.057 element at address: 0x20000b200000 with size: 0.489258 MiB 00:05:58.057 element at address: 0x200000800000 with size: 0.486877 MiB 00:05:58.057 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:58.057 element at address: 0x200027e00000 with size: 0.397949 MiB 00:05:58.057 element at address: 0x200003a00000 with size: 0.350769 MiB 00:05:58.057 list of standard malloc elements. size: 199.253784 MiB 00:05:58.057 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:58.057 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:58.057 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:58.057 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:58.057 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:58.057 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:58.057 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:58.057 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:58.057 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:58.057 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d7640 with size: 0.000183 MiB 
00:05:58.057 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:58.057 element at 
address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93b80 
with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:58.057 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e65e00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6cac0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 
00:05:58.057 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:58.057 element at 
address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:58.057 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:58.058 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:58.058 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:58.058 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:58.058 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:58.058 list of memzone associated elements. size: 602.262573 MiB 00:05:58.058 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:58.058 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:58.058 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:58.058 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:58.058 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:58.058 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61914_0 00:05:58.058 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:58.058 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61914_0 00:05:58.058 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:58.058 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61914_0 00:05:58.058 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:58.058 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:58.058 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:58.058 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:58.058 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:58.058 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61914 00:05:58.058 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:58.058 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61914 00:05:58.058 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:58.058 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61914 00:05:58.058 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:58.058 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:58.058 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:58.058 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:58.058 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:58.058 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:58.058 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:58.058 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:58.058 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:58.058 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61914 00:05:58.058 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:58.058 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61914 00:05:58.058 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:58.058 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61914 00:05:58.058 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:58.058 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61914 00:05:58.058 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:58.058 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61914 00:05:58.058 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:58.058 associated memzone info: size: 0.500366 MiB 
name: RG_MP_PDU_Pool 00:05:58.058 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:58.058 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:58.058 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:58.058 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:58.058 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:58.058 associated memzone info: size: 0.125366 MiB name: RG_ring_2_61914 00:05:58.058 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:58.058 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:58.058 element at address: 0x200027e65f80 with size: 0.023743 MiB 00:05:58.058 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:58.058 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:58.058 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61914 00:05:58.058 element at address: 0x200027e6c0c0 with size: 0.002441 MiB 00:05:58.058 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:58.058 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:58.058 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61914 00:05:58.058 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:58.058 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61914 00:05:58.058 element at address: 0x200027e6cb80 with size: 0.000305 MiB 00:05:58.058 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:58.058 14:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:58.058 14:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61914 00:05:58.058 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 61914 ']' 00:05:58.058 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 61914 00:05:58.058 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:58.058 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.058 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61914 00:05:58.058 killing process with pid 61914 00:05:58.058 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.058 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.058 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61914' 00:05:58.058 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 61914 00:05:58.058 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 61914 00:05:58.316 00:05:58.316 real 0m0.980s 00:05:58.316 user 0m1.083s 00:05:58.316 sys 0m0.295s 00:05:58.316 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.316 ************************************ 00:05:58.316 END TEST dpdk_mem_utility 00:05:58.316 ************************************ 00:05:58.316 14:45:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.574 14:45:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.574 14:45:37 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:58.574 14:45:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.574 14:45:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 
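The large heap/mempool/memzone dump above is produced in two steps shown in the trace: the env_dpdk_get_mem_stats RPC asks the running target to write its DPDK memory statistics to a file (the RPC reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then summarizes that file, first as totals and then per-memzone with -m 0. A minimal sketch of the same flow against a target on the default /var/tmp/spdk.sock (rpc_cmd in the trace is a thin wrapper around rpc.py):

#!/usr/bin/env bash
# Sketch: dump and summarize DPDK memory statistics from a running SPDK application.
rootdir=/home/vagrant/spdk_repo/spdk

# Ask the target to write its memory statistics; the reply names the dump file
# (/tmp/spdk_mem_dump.txt in the trace above).
"$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats

# Summarize the dump: heap/mempool/memzone totals, then element details for memzone 0.
"$rootdir/scripts/dpdk_mem_info.py"
"$rootdir/scripts/dpdk_mem_info.py" -m 0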
00:05:58.574 14:45:37 -- common/autotest_common.sh@10 -- # set +x 00:05:58.574 ************************************ 00:05:58.574 START TEST event 00:05:58.574 ************************************ 00:05:58.574 14:45:37 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:58.574 * Looking for test storage... 00:05:58.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:58.574 14:45:37 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:58.574 14:45:37 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:58.574 14:45:37 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.574 14:45:37 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:58.574 14:45:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.574 14:45:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.574 ************************************ 00:05:58.574 START TEST event_perf 00:05:58.574 ************************************ 00:05:58.574 14:45:37 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.575 Running I/O for 1 seconds...[2024-07-12 14:45:37.116676] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:05:58.575 [2024-07-12 14:45:37.116798] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61990 ] 00:05:58.833 [2024-07-12 14:45:37.255999] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.833 [2024-07-12 14:45:37.328560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.833 [2024-07-12 14:45:37.328644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.833 [2024-07-12 14:45:37.328714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.833 [2024-07-12 14:45:37.328718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.806 Running I/O for 1 seconds... 00:05:59.806 lcore 0: 183475 00:05:59.806 lcore 1: 183477 00:05:59.806 lcore 2: 183475 00:05:59.806 lcore 3: 183475 00:05:59.806 done. 
00:05:59.806 00:05:59.806 real 0m1.308s 00:05:59.806 user 0m4.131s 00:05:59.806 sys 0m0.052s 00:05:59.806 14:45:38 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.806 14:45:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.806 ************************************ 00:05:59.806 END TEST event_perf 00:05:59.806 ************************************ 00:05:59.806 14:45:38 event -- common/autotest_common.sh@1142 -- # return 0 00:05:59.806 14:45:38 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:59.806 14:45:38 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:59.806 14:45:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.806 14:45:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.806 ************************************ 00:05:59.806 START TEST event_reactor 00:05:59.806 ************************************ 00:05:59.806 14:45:38 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:00.065 [2024-07-12 14:45:38.473139] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:00.065 [2024-07-12 14:45:38.473426] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62023 ] 00:06:00.065 [2024-07-12 14:45:38.604660] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.065 [2024-07-12 14:45:38.676511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.440 test_start 00:06:01.440 oneshot 00:06:01.440 tick 100 00:06:01.440 tick 100 00:06:01.440 tick 250 00:06:01.440 tick 100 00:06:01.440 tick 100 00:06:01.440 tick 100 00:06:01.440 tick 250 00:06:01.440 tick 500 00:06:01.440 tick 100 00:06:01.440 tick 100 00:06:01.440 tick 250 00:06:01.440 tick 100 00:06:01.440 tick 100 00:06:01.440 test_end 00:06:01.440 ************************************ 00:06:01.440 END TEST event_reactor 00:06:01.440 ************************************ 00:06:01.440 00:06:01.440 real 0m1.291s 00:06:01.440 user 0m1.146s 00:06:01.440 sys 0m0.039s 00:06:01.440 14:45:39 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.440 14:45:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:01.440 14:45:39 event -- common/autotest_common.sh@1142 -- # return 0 00:06:01.440 14:45:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:01.440 14:45:39 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:01.440 14:45:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.440 14:45:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.440 ************************************ 00:06:01.440 START TEST event_reactor_perf 00:06:01.440 ************************************ 00:06:01.440 14:45:39 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:01.440 [2024-07-12 14:45:39.812453] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:06:01.440 [2024-07-12 14:45:39.812562] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62059 ] 00:06:01.440 [2024-07-12 14:45:39.949090] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.440 [2024-07-12 14:45:40.010576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.817 test_start 00:06:02.817 test_end 00:06:02.817 Performance: 358700 events per second 00:06:02.817 ************************************ 00:06:02.817 END TEST event_reactor_perf 00:06:02.817 ************************************ 00:06:02.817 00:06:02.817 real 0m1.286s 00:06:02.817 user 0m1.140s 00:06:02.817 sys 0m0.038s 00:06:02.817 14:45:41 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.817 14:45:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.817 14:45:41 event -- common/autotest_common.sh@1142 -- # return 0 00:06:02.817 14:45:41 event -- event/event.sh@49 -- # uname -s 00:06:02.817 14:45:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:02.817 14:45:41 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:02.817 14:45:41 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.817 14:45:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.817 14:45:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.817 ************************************ 00:06:02.817 START TEST event_scheduler 00:06:02.817 ************************************ 00:06:02.817 14:45:41 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:02.817 * Looking for test storage... 00:06:02.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:02.817 14:45:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:02.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.817 14:45:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62120 00:06:02.817 14:45:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.817 14:45:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62120 00:06:02.817 14:45:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:02.817 14:45:41 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 62120 ']' 00:06:02.817 14:45:41 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.817 14:45:41 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.817 14:45:41 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.817 14:45:41 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.817 14:45:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.817 [2024-07-12 14:45:41.257630] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:06:02.817 [2024-07-12 14:45:41.257924] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62120 ] 00:06:02.817 [2024-07-12 14:45:41.393928] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.076 [2024-07-12 14:45:41.501568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.076 [2024-07-12 14:45:41.501711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.076 [2024-07-12 14:45:41.501810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.076 [2024-07-12 14:45:41.501820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.076 14:45:41 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.076 14:45:41 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:03.076 14:45:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:03.076 14:45:41 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.076 14:45:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.076 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.076 POWER: Cannot set governor of lcore 0 to userspace 00:06:03.076 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.076 POWER: Cannot set governor of lcore 0 to performance 00:06:03.076 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.076 POWER: Cannot set governor of lcore 0 to userspace 00:06:03.076 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.076 POWER: Cannot set governor of lcore 0 to userspace 00:06:03.076 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:03.076 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:03.076 POWER: Unable to set Power Management Environment for lcore 0 00:06:03.076 [2024-07-12 14:45:41.567977] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:03.076 [2024-07-12 14:45:41.568003] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:03.076 [2024-07-12 14:45:41.568023] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:03.076 [2024-07-12 14:45:41.568061] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:03.076 [2024-07-12 14:45:41.568078] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:03.076 [2024-07-12 14:45:41.568094] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:03.076 14:45:41 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.076 14:45:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:03.076 14:45:41 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.076 14:45:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.076 [2024-07-12 14:45:41.626457] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
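What the scheduler test did just above, in plain terms: the test app is started with --wait-for-rpc so no subsystems initialize, the dynamic scheduler is selected over RPC, and initialization is then released with framework_start_init; the POWER notices show the DPDK governor falling back because the cpufreq sysfs entries are not available on this VM host. A minimal sketch of that bring-up using rpc.py directly, with the same arguments the trace shows (the real test uses the rpc_cmd wrapper and waitforlisten):

#!/usr/bin/env bash
# Sketch of the scheduler bring-up traced above.
rootdir=/home/vagrant/spdk_repo/spdk

# Start the test app paused: core mask 0xF, main core 2 (-p 0x2), wait for RPC before init.
"$rootdir/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
# (the real test waits for /var/tmp/spdk.sock with waitforlisten before issuing RPCs)

# Select the dynamic scheduler while the framework is still paused, then release init.
"$rootdir/scripts/rpc.py" framework_set_scheduler dynamic
"$rootdir/scripts/rpc.py" framework_start_init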
00:06:03.076 14:45:41 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.076 14:45:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:03.076 14:45:41 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.076 14:45:41 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.076 14:45:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.076 ************************************ 00:06:03.076 START TEST scheduler_create_thread 00:06:03.076 ************************************ 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.076 2 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.076 3 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.076 4 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.076 5 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.076 6 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.076 7 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.076 8 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.076 9 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.076 10 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.076 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.335 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.335 14:45:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:03.335 14:45:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:03.335 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.335 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.335 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.335 14:45:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:03.335 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.335 14:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.710 14:45:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.710 14:45:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:04.710 14:45:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:04.710 14:45:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.710 14:45:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.666 ************************************ 00:06:05.666 END TEST scheduler_create_thread 00:06:05.666 ************************************ 00:06:05.666 14:45:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.666 00:06:05.666 real 0m2.612s 00:06:05.666 user 0m0.021s 00:06:05.666 sys 0m0.003s 00:06:05.666 14:45:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.666 14:45:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.666 14:45:44 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:05.666 14:45:44 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:05.666 14:45:44 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62120 00:06:05.666 14:45:44 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 62120 ']' 00:06:05.666 14:45:44 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 62120 00:06:05.666 14:45:44 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:05.666 14:45:44 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.666 14:45:44 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62120 00:06:05.666 killing process with pid 62120 00:06:05.666 14:45:44 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:05.666 14:45:44 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:05.666 14:45:44 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62120' 00:06:05.666 14:45:44 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 62120 00:06:05.666 14:45:44 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 62120 00:06:06.233 [2024-07-12 14:45:44.730256] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
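Note: the scheduler_create_thread test above drives the app purely through scheduler_plugin RPCs. A condensed sketch of that flow, assuming rpc_cmd resolves to the plugin-aware rpc.py wrapper used by the harness and that each create call prints the new thread id (as with thread ids 11 and 12 above):

    # One pinned active and one pinned idle thread per core (core masks 0x1..0x8 above)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0

    # Unpinned threads: one ~30% active, one whose activity is raised at runtime
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50

    # Create and immediately delete a thread to exercise the removal path
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"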
00:06:06.492 00:06:06.492 real 0m3.769s 00:06:06.492 user 0m5.687s 00:06:06.492 sys 0m0.264s 00:06:06.492 14:45:44 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.492 ************************************ 00:06:06.492 END TEST event_scheduler 00:06:06.492 ************************************ 00:06:06.492 14:45:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.492 14:45:44 event -- common/autotest_common.sh@1142 -- # return 0 00:06:06.492 14:45:44 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:06.492 14:45:44 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:06.492 14:45:44 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.492 14:45:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.492 14:45:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.492 ************************************ 00:06:06.492 START TEST app_repeat 00:06:06.492 ************************************ 00:06:06.492 14:45:44 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:06.492 14:45:44 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.492 14:45:44 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.492 14:45:44 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:06.492 14:45:44 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.492 14:45:44 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:06.492 14:45:44 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:06.492 14:45:44 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:06.492 Process app_repeat pid: 62219 00:06:06.492 14:45:44 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62219 00:06:06.492 14:45:44 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.492 14:45:44 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62219' 00:06:06.492 14:45:44 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:06.492 14:45:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.492 spdk_app_start Round 0 00:06:06.492 14:45:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:06.492 14:45:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62219 /var/tmp/spdk-nbd.sock 00:06:06.492 14:45:44 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62219 ']' 00:06:06.492 14:45:44 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.492 14:45:44 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.492 14:45:44 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.492 14:45:44 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.492 14:45:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.492 [2024-07-12 14:45:44.983740] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
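Note: app_repeat runs the same bdev/NBD cycle three times against a dedicated RPC socket. A sketch of the launch as it appears in this run (-m 0x3 pins the app to cores 0-1, -t 4 keeps each round alive for roughly four seconds; waiting for the socket is again left to the harness's waitforlisten):

    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
        -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!

Each of the three rounds below then creates two Malloc bdevs, exports them over NBD, runs the write/verify pass, and restarts the app with spdk_kill_instance SIGTERM followed by a 3-second sleep.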
00:06:06.492 [2024-07-12 14:45:44.983829] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62219 ] 00:06:06.492 [2024-07-12 14:45:45.117012] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.750 [2024-07-12 14:45:45.176186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.750 [2024-07-12 14:45:45.176199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.750 14:45:45 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.750 14:45:45 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:06.750 14:45:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.009 Malloc0 00:06:07.009 14:45:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.268 Malloc1 00:06:07.268 14:45:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.268 14:45:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.268 14:45:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.268 14:45:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.268 14:45:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.268 14:45:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.268 14:45:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.268 14:45:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.268 14:45:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.268 14:45:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.268 14:45:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.268 14:45:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.268 14:45:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:07.268 14:45:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.268 14:45:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.268 14:45:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.527 /dev/nbd0 00:06:07.527 14:45:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.527 14:45:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.527 14:45:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:07.527 14:45:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:07.527 14:45:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:07.527 14:45:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:07.527 14:45:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:07.527 14:45:46 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:06:07.527 14:45:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:07.527 14:45:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:07.527 14:45:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.527 1+0 records in 00:06:07.527 1+0 records out 00:06:07.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198664 s, 20.6 MB/s 00:06:07.527 14:45:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.527 14:45:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:07.527 14:45:46 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.527 14:45:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:07.527 14:45:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:07.527 14:45:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.527 14:45:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.527 14:45:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.092 /dev/nbd1 00:06:08.092 14:45:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.093 14:45:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.093 14:45:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:08.093 14:45:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:08.093 14:45:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:08.093 14:45:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:08.093 14:45:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:08.093 14:45:46 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:08.093 14:45:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:08.093 14:45:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:08.093 14:45:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.093 1+0 records in 00:06:08.093 1+0 records out 00:06:08.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439883 s, 9.3 MB/s 00:06:08.093 14:45:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.093 14:45:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:08.093 14:45:46 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.093 14:45:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:08.093 14:45:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:08.093 14:45:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.093 14:45:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.093 14:45:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.093 14:45:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.093 
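Note: Round 0's setup above is the usual malloc-plus-NBD export followed by the waitfornbd probe. A sketch using the same paths and sizes, assuming the two bdev_malloc_create calls return Malloc0 and Malloc1 as in the log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096            # 64 MiB, 4 KiB blocks -> Malloc0
    $rpc bdev_malloc_create 64 4096            # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # waitfornbd: the export is usable once it appears in /proc/partitions
    # and a single 4 KiB O_DIRECT read succeeds
    grep -q -w nbd0 /proc/partitions
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct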
14:45:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.351 { 00:06:08.351 "bdev_name": "Malloc0", 00:06:08.351 "nbd_device": "/dev/nbd0" 00:06:08.351 }, 00:06:08.351 { 00:06:08.351 "bdev_name": "Malloc1", 00:06:08.351 "nbd_device": "/dev/nbd1" 00:06:08.351 } 00:06:08.351 ]' 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.351 { 00:06:08.351 "bdev_name": "Malloc0", 00:06:08.351 "nbd_device": "/dev/nbd0" 00:06:08.351 }, 00:06:08.351 { 00:06:08.351 "bdev_name": "Malloc1", 00:06:08.351 "nbd_device": "/dev/nbd1" 00:06:08.351 } 00:06:08.351 ]' 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.351 /dev/nbd1' 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.351 /dev/nbd1' 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.351 256+0 records in 00:06:08.351 256+0 records out 00:06:08.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00656814 s, 160 MB/s 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.351 256+0 records in 00:06:08.351 256+0 records out 00:06:08.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259433 s, 40.4 MB/s 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.351 256+0 records in 00:06:08.351 256+0 records out 00:06:08.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298218 s, 35.2 MB/s 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.351 14:45:46 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.351 14:45:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.608 14:45:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.608 14:45:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.608 14:45:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.608 14:45:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.608 14:45:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.608 14:45:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.608 14:45:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.608 14:45:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.608 14:45:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.608 14:45:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.865 14:45:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.865 14:45:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.865 14:45:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.865 14:45:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.865 14:45:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.865 14:45:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.122 14:45:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.122 14:45:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.122 14:45:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.122 14:45:47 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.122 14:45:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.381 14:45:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.381 14:45:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.381 14:45:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.381 14:45:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.381 14:45:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.381 14:45:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.381 14:45:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:09.381 14:45:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.381 14:45:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.381 14:45:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.381 14:45:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.381 14:45:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.381 14:45:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.638 14:45:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.638 [2024-07-12 14:45:48.249251] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.895 [2024-07-12 14:45:48.306083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.895 [2024-07-12 14:45:48.306093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.895 [2024-07-12 14:45:48.335292] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.895 [2024-07-12 14:45:48.335347] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.210 14:45:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:13.210 spdk_app_start Round 1 00:06:13.210 14:45:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:13.210 14:45:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62219 /var/tmp/spdk-nbd.sock 00:06:13.210 14:45:51 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62219 ']' 00:06:13.210 14:45:51 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.210 14:45:51 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.210 14:45:51 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
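Note: the pass that closed Round 0 above (and repeats in Rounds 1 and 2) is nbd_dd_data_verify: 1 MiB of random data written through each NBD device and compared back byte for byte. A sketch with the same temp file and sizes as the log:

    testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$testfile" bs=4096 count=256          # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$testfile" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$testfile" "$nbd"                          # verify the first 1 MiB
    done
    rm "$testfile"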
00:06:13.210 14:45:51 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.210 14:45:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.210 14:45:51 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.210 14:45:51 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:13.210 14:45:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.210 Malloc0 00:06:13.210 14:45:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.468 Malloc1 00:06:13.468 14:45:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.468 14:45:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.468 14:45:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.468 14:45:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.468 14:45:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.468 14:45:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.468 14:45:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.468 14:45:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.468 14:45:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.468 14:45:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.468 14:45:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.468 14:45:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.468 14:45:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.468 14:45:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.468 14:45:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.468 14:45:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.727 /dev/nbd0 00:06:13.727 14:45:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.727 14:45:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.727 14:45:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:13.727 14:45:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:13.727 14:45:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:13.727 14:45:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:13.727 14:45:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:13.727 14:45:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:13.727 14:45:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:13.727 14:45:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:13.985 14:45:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.985 1+0 records in 00:06:13.985 1+0 records out 
00:06:13.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288295 s, 14.2 MB/s 00:06:13.985 14:45:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.985 14:45:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:13.985 14:45:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.985 14:45:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:13.985 14:45:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:13.985 14:45:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.985 14:45:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.985 14:45:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.243 /dev/nbd1 00:06:14.244 14:45:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.244 14:45:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.244 14:45:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:14.244 14:45:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:14.244 14:45:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:14.244 14:45:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:14.244 14:45:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:14.244 14:45:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:14.244 14:45:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:14.244 14:45:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:14.244 14:45:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.244 1+0 records in 00:06:14.244 1+0 records out 00:06:14.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275785 s, 14.9 MB/s 00:06:14.244 14:45:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.244 14:45:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:14.244 14:45:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.244 14:45:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:14.244 14:45:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:14.244 14:45:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.244 14:45:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.244 14:45:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.244 14:45:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.244 14:45:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.501 { 00:06:14.501 "bdev_name": "Malloc0", 00:06:14.501 "nbd_device": "/dev/nbd0" 00:06:14.501 }, 00:06:14.501 { 00:06:14.501 "bdev_name": "Malloc1", 00:06:14.501 "nbd_device": "/dev/nbd1" 00:06:14.501 } 
00:06:14.501 ]' 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.501 { 00:06:14.501 "bdev_name": "Malloc0", 00:06:14.501 "nbd_device": "/dev/nbd0" 00:06:14.501 }, 00:06:14.501 { 00:06:14.501 "bdev_name": "Malloc1", 00:06:14.501 "nbd_device": "/dev/nbd1" 00:06:14.501 } 00:06:14.501 ]' 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.501 /dev/nbd1' 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.501 /dev/nbd1' 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.501 256+0 records in 00:06:14.501 256+0 records out 00:06:14.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00826764 s, 127 MB/s 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.501 256+0 records in 00:06:14.501 256+0 records out 00:06:14.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284678 s, 36.8 MB/s 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.501 14:45:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.758 256+0 records in 00:06:14.758 256+0 records out 00:06:14.758 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267707 s, 39.2 MB/s 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.758 14:45:53 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.758 14:45:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.016 14:45:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.016 14:45:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.016 14:45:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.016 14:45:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.016 14:45:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.016 14:45:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.016 14:45:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.016 14:45:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.016 14:45:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.016 14:45:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.274 14:45:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.274 14:45:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.274 14:45:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.274 14:45:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.274 14:45:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.274 14:45:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.274 14:45:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.274 14:45:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.274 14:45:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.274 14:45:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.274 14:45:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.532 14:45:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.532 14:45:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.532 14:45:54 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:15.532 14:45:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.532 14:45:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.532 14:45:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.532 14:45:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:15.532 14:45:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.532 14:45:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.532 14:45:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.532 14:45:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.532 14:45:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.532 14:45:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.099 14:45:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.099 [2024-07-12 14:45:54.611260] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.099 [2024-07-12 14:45:54.669797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.099 [2024-07-12 14:45:54.669807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.099 [2024-07-12 14:45:54.701316] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.099 [2024-07-12 14:45:54.701374] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.380 14:45:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:19.380 spdk_app_start Round 2 00:06:19.380 14:45:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:19.380 14:45:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62219 /var/tmp/spdk-nbd.sock 00:06:19.380 14:45:57 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62219 ']' 00:06:19.380 14:45:57 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.380 14:45:57 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.380 14:45:57 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
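Note: between rounds the exports are torn down and the app is bounced; the empty nbd_get_disks list above confirms nothing is left behind before SIGTERM. A sketch of that teardown on the same socket (the waitfornbd_exit polling of /proc/partitions is omitted):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    count=$($rpc nbd_get_disks | grep -c /dev/nbd || true)       # expect 0 remaining exports
    [ "$count" -eq 0 ]
    $rpc spdk_kill_instance SIGTERM
    sleep 3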
00:06:19.380 14:45:57 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.380 14:45:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.380 14:45:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.380 14:45:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:19.380 14:45:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.380 Malloc0 00:06:19.639 14:45:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.639 Malloc1 00:06:19.898 14:45:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.898 /dev/nbd0 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.898 14:45:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.898 14:45:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:19.898 14:45:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:19.898 14:45:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:19.898 14:45:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:19.898 14:45:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:20.156 14:45:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:20.156 14:45:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:20.156 14:45:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:20.156 14:45:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.156 1+0 records in 00:06:20.156 1+0 records out 
00:06:20.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212235 s, 19.3 MB/s 00:06:20.156 14:45:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.156 14:45:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:20.156 14:45:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.156 14:45:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:20.156 14:45:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:20.156 14:45:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.156 14:45:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.156 14:45:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.415 /dev/nbd1 00:06:20.415 14:45:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.415 14:45:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.415 14:45:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:20.415 14:45:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:20.415 14:45:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:20.415 14:45:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:20.415 14:45:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:20.415 14:45:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:20.415 14:45:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:20.415 14:45:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:20.415 14:45:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.415 1+0 records in 00:06:20.415 1+0 records out 00:06:20.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328142 s, 12.5 MB/s 00:06:20.415 14:45:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.415 14:45:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:20.415 14:45:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.415 14:45:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:20.415 14:45:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:20.415 14:45:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.415 14:45:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.415 14:45:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.415 14:45:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.415 14:45:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.673 14:45:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.673 { 00:06:20.673 "bdev_name": "Malloc0", 00:06:20.673 "nbd_device": "/dev/nbd0" 00:06:20.673 }, 00:06:20.673 { 00:06:20.673 "bdev_name": "Malloc1", 00:06:20.673 "nbd_device": "/dev/nbd1" 00:06:20.673 } 
00:06:20.673 ]' 00:06:20.673 14:45:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.673 { 00:06:20.673 "bdev_name": "Malloc0", 00:06:20.673 "nbd_device": "/dev/nbd0" 00:06:20.673 }, 00:06:20.673 { 00:06:20.673 "bdev_name": "Malloc1", 00:06:20.673 "nbd_device": "/dev/nbd1" 00:06:20.673 } 00:06:20.673 ]' 00:06:20.673 14:45:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.673 14:45:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.673 /dev/nbd1' 00:06:20.673 14:45:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.673 /dev/nbd1' 00:06:20.673 14:45:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.673 14:45:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.673 14:45:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.673 14:45:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.674 256+0 records in 00:06:20.674 256+0 records out 00:06:20.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0074863 s, 140 MB/s 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.674 256+0 records in 00:06:20.674 256+0 records out 00:06:20.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269024 s, 39.0 MB/s 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.674 256+0 records in 00:06:20.674 256+0 records out 00:06:20.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273054 s, 38.4 MB/s 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.674 14:45:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.240 14:45:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.808 14:46:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.808 14:46:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.808 14:46:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:21.808 14:46:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.808 14:46:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.808 14:46:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.808 14:46:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.808 14:46:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.808 14:46:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.808 14:46:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.808 14:46:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.808 14:46:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.808 14:46:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.066 14:46:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.066 [2024-07-12 14:46:00.664950] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.325 [2024-07-12 14:46:00.724141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.325 [2024-07-12 14:46:00.724150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.325 [2024-07-12 14:46:00.753728] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.325 [2024-07-12 14:46:00.753782] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.606 14:46:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62219 /var/tmp/spdk-nbd.sock 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62219 ']' 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
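The trace above exercises nbd_dd_data_verify: 1 MiB of random data is written through each NBD device with O_DIRECT and then compared back against the source file. A minimal stand-alone sketch of that write-then-verify pattern (device names, block size and cmp flags taken from the trace; the temp-file location here is arbitrary rather than the harness's nbdrandtest path):

    tmp_file=$(mktemp)                                      # scratch pattern file
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256     # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write the pattern through the NBD device
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$dev"                     # byte-compare the first 1 MiB read back from the device
    done
    rm "$tmp_file"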
00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:25.606 14:46:03 event.app_repeat -- event/event.sh@39 -- # killprocess 62219 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 62219 ']' 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 62219 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62219 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.606 killing process with pid 62219 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62219' 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@967 -- # kill 62219 00:06:25.606 14:46:03 event.app_repeat -- common/autotest_common.sh@972 -- # wait 62219 00:06:25.606 spdk_app_start is called in Round 0. 00:06:25.606 Shutdown signal received, stop current app iteration 00:06:25.606 Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 reinitialization... 00:06:25.606 spdk_app_start is called in Round 1. 00:06:25.606 Shutdown signal received, stop current app iteration 00:06:25.606 Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 reinitialization... 00:06:25.606 spdk_app_start is called in Round 2. 00:06:25.606 Shutdown signal received, stop current app iteration 00:06:25.606 Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 reinitialization... 00:06:25.606 spdk_app_start is called in Round 3. 
00:06:25.606 Shutdown signal received, stop current app iteration 00:06:25.606 14:46:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:25.606 14:46:04 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:25.606 00:06:25.606 real 0m19.064s 00:06:25.606 user 0m43.619s 00:06:25.606 sys 0m2.903s 00:06:25.606 14:46:04 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.606 14:46:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.606 ************************************ 00:06:25.606 END TEST app_repeat 00:06:25.606 ************************************ 00:06:25.606 14:46:04 event -- common/autotest_common.sh@1142 -- # return 0 00:06:25.606 14:46:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:25.606 14:46:04 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:25.606 14:46:04 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.606 14:46:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.606 14:46:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.606 ************************************ 00:06:25.606 START TEST cpu_locks 00:06:25.606 ************************************ 00:06:25.606 14:46:04 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:25.606 * Looking for test storage... 00:06:25.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:25.606 14:46:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:25.606 14:46:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:25.606 14:46:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:25.606 14:46:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:25.606 14:46:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.606 14:46:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.606 14:46:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.606 ************************************ 00:06:25.606 START TEST default_locks 00:06:25.606 ************************************ 00:06:25.606 14:46:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:25.606 14:46:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62842 00:06:25.606 14:46:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62842 00:06:25.606 14:46:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.606 14:46:04 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62842 ']' 00:06:25.606 14:46:04 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.606 14:46:04 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.606 14:46:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
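app_repeat above is shut down through the harness's killprocess helper: confirm the pid is still alive with kill -0, look up the process name, then kill and reap it. A stripped-down approximation of that sequence (the sudo branch visible in the trace is omitted, and this is not the harness's exact implementation):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1              # bail out if the pid is already gone
        ps --no-headers -o comm= "$pid"         # show what is about to be killed (reactor_0 in the trace)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                             # reap it; works because the target was started by this shell
    }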
00:06:25.606 14:46:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.606 14:46:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.606 [2024-07-12 14:46:04.247939] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:25.606 [2024-07-12 14:46:04.248037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62842 ] 00:06:25.864 [2024-07-12 14:46:04.384642] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.864 [2024-07-12 14:46:04.447064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.793 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.793 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:26.793 14:46:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62842 00:06:26.793 14:46:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.793 14:46:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62842 00:06:27.049 14:46:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62842 00:06:27.049 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 62842 ']' 00:06:27.049 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 62842 00:06:27.049 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:27.049 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.050 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62842 00:06:27.050 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.050 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.050 killing process with pid 62842 00:06:27.050 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62842' 00:06:27.050 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 62842 00:06:27.050 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 62842 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62842 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62842 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62842 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 62842 ']' 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.306 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62842) - No such process 00:06:27.306 ERROR: process (pid: 62842) is no longer running 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.306 00:06:27.306 real 0m1.807s 00:06:27.306 user 0m2.088s 00:06:27.306 sys 0m0.486s 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.306 14:46:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.306 ************************************ 00:06:27.306 END TEST default_locks 00:06:27.306 ************************************ 00:06:27.563 14:46:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:27.563 14:46:05 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:27.563 14:46:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.563 14:46:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.563 14:46:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.563 ************************************ 00:06:27.563 START TEST default_locks_via_rpc 00:06:27.563 ************************************ 00:06:27.563 14:46:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:27.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
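The block above also shows the NOT helper doing its job: waitforlisten is pointed at a pid that was already killed, the "No such process" / "is no longer running" messages are expected, and the non-zero exit is converted into a pass. A bare-bones negative-assertion wrapper in the same spirit (not the harness's exact es-counting version):

    NOT() {
        if "$@"; then        # run the wrapped command
            return 1         # it succeeded, so the negative assertion fails
        fi
        return 0             # it failed, which is exactly what was expected
    }
    NOT ls /no/such/path     # passes, because ls exits non-zero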
00:06:27.563 14:46:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62895 00:06:27.563 14:46:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62895 00:06:27.563 14:46:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.563 14:46:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62895 ']' 00:06:27.563 14:46:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.563 14:46:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.563 14:46:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.563 14:46:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.563 14:46:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.563 [2024-07-12 14:46:06.070801] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:27.564 [2024-07-12 14:46:06.070904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62895 ] 00:06:27.564 [2024-07-12 14:46:06.205010] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.821 [2024-07-12 14:46:06.265753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62895 00:06:28.768 14:46:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62895 00:06:28.768 
14:46:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.034 14:46:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62895 00:06:29.034 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 62895 ']' 00:06:29.034 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 62895 00:06:29.034 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:29.034 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.034 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62895 00:06:29.034 killing process with pid 62895 00:06:29.034 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.034 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.034 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62895' 00:06:29.034 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 62895 00:06:29.034 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 62895 00:06:29.293 ************************************ 00:06:29.293 END TEST default_locks_via_rpc 00:06:29.293 ************************************ 00:06:29.293 00:06:29.293 real 0m1.809s 00:06:29.293 user 0m2.104s 00:06:29.293 sys 0m0.481s 00:06:29.293 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.293 14:46:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.293 14:46:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:29.293 14:46:07 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:29.293 14:46:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.293 14:46:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.293 14:46:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.293 ************************************ 00:06:29.293 START TEST non_locking_app_on_locked_coremask 00:06:29.293 ************************************ 00:06:29.293 14:46:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:29.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
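locks_exist, seen just above for pid 62895, is how these tests confirm a target really holds its per-core lock file: list the locks the pid owns and filter for the spdk_cpu_lock name. Roughly:

    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock   # non-zero exit if the pid holds no core-lock file
    }
    locks_exist 62895 && echo "pid 62895 holds its CPU core lock"   # as checked before the target was killed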
00:06:29.293 14:46:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62964 00:06:29.293 14:46:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62964 /var/tmp/spdk.sock 00:06:29.293 14:46:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.293 14:46:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62964 ']' 00:06:29.293 14:46:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.293 14:46:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.293 14:46:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.293 14:46:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.293 14:46:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.293 [2024-07-12 14:46:07.932696] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:29.293 [2024-07-12 14:46:07.932828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62964 ] 00:06:29.551 [2024-07-12 14:46:08.075270] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.551 [2024-07-12 14:46:08.144206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.808 14:46:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.808 14:46:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:29.808 14:46:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62984 00:06:29.808 14:46:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:29.808 14:46:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 62984 /var/tmp/spdk2.sock 00:06:29.808 14:46:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62984 ']' 00:06:29.808 14:46:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.808 14:46:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.808 14:46:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
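non_locking_app_on_locked_coremask, whose setup is traced above, starts a second target on the same core mask but with --disable-cpumask-locks, so it never competes for the lock the first target already holds. Reduced to the two launches (binary path and flags as shown in the trace):

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$bin" -m 0x1 &                                                  # first target claims core 0
    pid1=$!
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second target skips the lock, so both can run
    pid2=$!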
00:06:29.808 14:46:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.808 14:46:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.808 [2024-07-12 14:46:08.365305] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:29.808 [2024-07-12 14:46:08.365397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62984 ] 00:06:30.067 [2024-07-12 14:46:08.509001] app.c: 910:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:30.067 [2024-07-12 14:46:08.509061] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.067 [2024-07-12 14:46:08.629331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.001 14:46:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.001 14:46:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:31.001 14:46:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62964 00:06:31.001 14:46:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62964 00:06:31.001 14:46:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.935 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62964 00:06:31.935 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62964 ']' 00:06:31.935 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62964 00:06:31.935 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:31.935 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.935 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62964 00:06:31.935 killing process with pid 62964 00:06:31.935 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.935 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.935 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62964' 00:06:31.935 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62964 00:06:31.935 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62964 00:06:32.194 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 62984 00:06:32.194 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62984 ']' 00:06:32.194 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62984 00:06:32.194 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 
-- # uname 00:06:32.194 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.194 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62984 00:06:32.194 killing process with pid 62984 00:06:32.194 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.194 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.194 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62984' 00:06:32.194 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62984 00:06:32.194 14:46:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62984 00:06:32.452 ************************************ 00:06:32.452 END TEST non_locking_app_on_locked_coremask 00:06:32.452 ************************************ 00:06:32.452 00:06:32.453 real 0m3.222s 00:06:32.453 user 0m3.821s 00:06:32.453 sys 0m0.878s 00:06:32.453 14:46:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.453 14:46:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.711 14:46:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:32.711 14:46:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:32.711 14:46:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.711 14:46:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.711 14:46:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.711 ************************************ 00:06:32.711 START TEST locking_app_on_unlocked_coremask 00:06:32.711 ************************************ 00:06:32.711 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:32.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.711 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63058 00:06:32.711 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:32.711 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63058 /var/tmp/spdk.sock 00:06:32.711 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63058 ']' 00:06:32.711 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.711 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.711 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
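Each waitforlisten call above polls (max_retries=100 in the trace) until the freshly launched target is reachable on its UNIX-domain socket. A simplified retry loop in the same spirit, using a plain socket-node check instead of the harness's full RPC probe:

    sock=/var/tmp/spdk.sock
    max_retries=100
    i=0
    until [ -S "$sock" ]; do
        i=$((i + 1))
        [ "$i" -gt "$max_retries" ] && { echo "timed out waiting for $sock" >&2; exit 1; }
        sleep 0.1
    done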
00:06:32.711 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.711 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.711 [2024-07-12 14:46:11.180456] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:32.711 [2024-07-12 14:46:11.181395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63058 ] 00:06:32.711 [2024-07-12 14:46:11.318356] app.c: 910:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:32.711 [2024-07-12 14:46:11.318729] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.969 [2024-07-12 14:46:11.424832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.969 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.969 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:32.969 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63072 00:06:32.969 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63072 /var/tmp/spdk2.sock 00:06:32.969 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:32.969 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63072 ']' 00:06:32.969 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.969 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.969 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.969 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.969 14:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.227 [2024-07-12 14:46:11.675540] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
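Here the first target of locking_app_on_unlocked_coremask starts with --disable-cpumask-locks, hence the "CPU core locks deactivated." notice. The same effect can also be toggled at runtime through the framework RPCs that default_locks_via_rpc exercised earlier in this trace, e.g.:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # release the per-core lock files while running
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # claim them again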
00:06:33.227 [2024-07-12 14:46:11.675641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63072 ] 00:06:33.227 [2024-07-12 14:46:11.821260] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.485 [2024-07-12 14:46:11.943646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.052 14:46:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.052 14:46:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:34.052 14:46:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63072 00:06:34.052 14:46:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63072 00:06:34.052 14:46:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.023 14:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63058 00:06:35.023 14:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63058 ']' 00:06:35.023 14:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63058 00:06:35.023 14:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:35.023 14:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.023 14:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63058 00:06:35.023 killing process with pid 63058 00:06:35.023 14:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.023 14:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.023 14:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63058' 00:06:35.023 14:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63058 00:06:35.024 14:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63058 00:06:35.588 14:46:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63072 00:06:35.589 14:46:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63072 ']' 00:06:35.589 14:46:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63072 00:06:35.589 14:46:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:35.589 14:46:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.589 14:46:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63072 00:06:35.589 killing process with pid 63072 00:06:35.589 14:46:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.589 14:46:14 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.589 14:46:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63072' 00:06:35.589 14:46:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63072 00:06:35.589 14:46:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63072 00:06:35.847 00:06:35.847 real 0m3.269s 00:06:35.847 user 0m3.847s 00:06:35.847 sys 0m0.925s 00:06:35.847 14:46:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.847 14:46:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.847 ************************************ 00:06:35.847 END TEST locking_app_on_unlocked_coremask 00:06:35.847 ************************************ 00:06:35.847 14:46:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:35.847 14:46:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:35.847 14:46:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.847 14:46:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.847 14:46:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.847 ************************************ 00:06:35.847 START TEST locking_app_on_locked_coremask 00:06:35.847 ************************************ 00:06:35.847 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:35.847 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63150 00:06:35.847 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.847 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63150 /var/tmp/spdk.sock 00:06:35.847 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63150 ']' 00:06:35.847 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.847 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.847 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.847 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.847 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.847 [2024-07-12 14:46:14.487804] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
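Every "START TEST" / "END TEST" banner pair and the real/user/sys summary in this log come from the run_test wrapper, which names the test, times it and propagates its status. Ignoring the xtrace bookkeeping the real helper adds, the shape of it is roughly:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                 # run the test function; prints the real/user/sys line seen in the log
        local rc=$?
        echo "END TEST $name"
        return "$rc"
    }
    run_test locking_app_on_locked_coremask locking_app_on_locked_coremask   # as invoked above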
00:06:35.847 [2024-07-12 14:46:14.487898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63150 ] 00:06:36.105 [2024-07-12 14:46:14.620795] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.105 [2024-07-12 14:46:14.682207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63160 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63160 /var/tmp/spdk2.sock 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63160 /var/tmp/spdk2.sock 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:36.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63160 /var/tmp/spdk2.sock 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63160 ']' 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.363 14:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.363 [2024-07-12 14:46:14.919863] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:06:36.363 [2024-07-12 14:46:14.919971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63160 ] 00:06:36.620 [2024-07-12 14:46:15.064763] app.c: 775:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63150 has claimed it. 00:06:36.620 [2024-07-12 14:46:15.064847] app.c: 906:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:37.184 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63160) - No such process 00:06:37.184 ERROR: process (pid: 63160) is no longer running 00:06:37.184 14:46:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.184 14:46:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:37.184 14:46:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:37.184 14:46:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.184 14:46:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.184 14:46:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.184 14:46:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63150 00:06:37.184 14:46:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.184 14:46:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63150 00:06:37.749 14:46:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63150 00:06:37.749 14:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63150 ']' 00:06:37.749 14:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63150 00:06:37.749 14:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:37.749 14:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.749 14:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63150 00:06:37.749 killing process with pid 63150 00:06:37.749 14:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.749 14:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.749 14:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63150' 00:06:37.749 14:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63150 00:06:37.749 14:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63150 00:06:37.749 ************************************ 00:06:37.749 END TEST locking_app_on_locked_coremask 00:06:37.749 ************************************ 00:06:37.749 00:06:37.749 real 0m1.968s 00:06:37.749 user 0m2.322s 00:06:37.749 sys 0m0.509s 00:06:37.749 14:46:16 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.749 14:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.006 14:46:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:38.006 14:46:16 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:38.006 14:46:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.006 14:46:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.006 14:46:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.006 ************************************ 00:06:38.006 START TEST locking_overlapped_coremask 00:06:38.006 ************************************ 00:06:38.006 14:46:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:38.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.006 14:46:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63212 00:06:38.006 14:46:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63212 /var/tmp/spdk.sock 00:06:38.006 14:46:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63212 ']' 00:06:38.006 14:46:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:38.006 14:46:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.006 14:46:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.006 14:46:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.006 14:46:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.006 14:46:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.006 [2024-07-12 14:46:16.506030] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
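The locking_app_on_locked_coremask run that finished just above is the negative counterpart of the earlier cases: both targets request -m 0x1, so the second cannot claim core 0 ("Cannot create lock on core 0, probably process 63150 has claimed it") and its waitforlisten is wrapped in NOT so the failure counts as a pass. With the helpers sketched earlier and the paths from the trace, the scenario boils down to:

    "$bin" -m 0x1 &                                  # first target takes the core 0 lock
    pid1=$!
    waitforlisten "$pid1"
    "$bin" -m 0x1 -r /var/tmp/spdk2.sock &           # same mask, locks still enabled
    pid2=$!
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock    # expected to fail with the claim_cpu_cores error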
00:06:38.006 [2024-07-12 14:46:16.506127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63212 ] 00:06:38.006 [2024-07-12 14:46:16.645960] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.263 [2024-07-12 14:46:16.718548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.263 [2024-07-12 14:46:16.718428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.263 [2024-07-12 14:46:16.718543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63242 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63242 /var/tmp/spdk2.sock 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63242 /var/tmp/spdk2.sock 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:39.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63242 /var/tmp/spdk2.sock 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63242 ']' 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.194 14:46:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.194 [2024-07-12 14:46:17.566683] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
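The second launch attempted above uses -m 0x1c against a running target holding -m 0x7; 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so they collide only on core 2, which is exactly the core named in the "Cannot create lock on core 2" error logged below. The overlap is easy to confirm:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 is the only contested core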
00:06:39.194 [2024-07-12 14:46:17.566788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63242 ] 00:06:39.194 [2024-07-12 14:46:17.711449] app.c: 775:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63212 has claimed it. 00:06:39.194 [2024-07-12 14:46:17.711537] app.c: 906:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.760 ERROR: process (pid: 63242) is no longer running 00:06:39.760 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63242) - No such process 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63212 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 63212 ']' 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 63212 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63212 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63212' 00:06:39.760 killing process with pid 63212 00:06:39.760 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 63212 00:06:39.760 14:46:18 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 63212 00:06:40.018 00:06:40.018 real 0m2.135s 00:06:40.018 user 0m6.127s 00:06:40.018 sys 0m0.342s 00:06:40.018 ************************************ 00:06:40.018 END TEST locking_overlapped_coremask 00:06:40.018 ************************************ 00:06:40.018 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.018 14:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.018 14:46:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:40.018 14:46:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:40.018 14:46:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.018 14:46:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.018 14:46:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.018 ************************************ 00:06:40.018 START TEST locking_overlapped_coremask_via_rpc 00:06:40.018 ************************************ 00:06:40.018 14:46:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:40.019 14:46:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63292 00:06:40.019 14:46:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:40.019 14:46:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63292 /var/tmp/spdk.sock 00:06:40.019 14:46:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63292 ']' 00:06:40.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.019 14:46:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.019 14:46:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.019 14:46:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.019 14:46:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.019 14:46:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.277 [2024-07-12 14:46:18.696185] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:40.277 [2024-07-12 14:46:18.696280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63292 ] 00:06:40.277 [2024-07-12 14:46:18.827791] app.c: 910:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:40.277 [2024-07-12 14:46:18.827877] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.277 [2024-07-12 14:46:18.889920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.277 [2024-07-12 14:46:18.890041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.277 [2024-07-12 14:46:18.890048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.211 14:46:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.211 14:46:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:41.211 14:46:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63323 00:06:41.211 14:46:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:41.211 14:46:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63323 /var/tmp/spdk2.sock 00:06:41.211 14:46:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63323 ']' 00:06:41.211 14:46:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.211 14:46:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.211 14:46:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.211 14:46:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.211 14:46:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.211 [2024-07-12 14:46:19.740087] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:41.211 [2024-07-12 14:46:19.740171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63323 ] 00:06:41.470 [2024-07-12 14:46:19.896281] app.c: 910:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:41.470 [2024-07-12 14:46:19.896390] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.470 [2024-07-12 14:46:20.045370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.470 [2024-07-12 14:46:20.045433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:41.470 [2024-07-12 14:46:20.045436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.475 [2024-07-12 14:46:20.793686] app.c: 775:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63292 has claimed it. 00:06:42.475 2024/07/12 14:46:20 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:42.475 request: 00:06:42.475 { 00:06:42.475 "method": "framework_enable_cpumask_locks", 00:06:42.475 "params": {} 00:06:42.475 } 00:06:42.475 Got JSON-RPC error response 00:06:42.475 GoRPCClient: error on JSON-RPC call 00:06:42.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63292 /var/tmp/spdk.sock 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63292 ']' 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.475 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.476 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.476 14:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.733 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.733 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:42.733 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63323 /var/tmp/spdk2.sock 00:06:42.733 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63323 ']' 00:06:42.733 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.733 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.733 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:42.733 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.733 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.991 ************************************ 00:06:42.991 END TEST locking_overlapped_coremask_via_rpc 00:06:42.991 ************************************ 00:06:42.991 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.991 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:42.991 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:42.991 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:42.991 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:42.991 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:42.991 00:06:42.991 real 0m2.844s 00:06:42.991 user 0m1.542s 00:06:42.991 sys 0m0.230s 00:06:42.991 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.991 14:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.991 14:46:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:42.991 14:46:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:42.991 14:46:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63292 ]] 00:06:42.991 14:46:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63292 00:06:42.991 14:46:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63292 ']' 00:06:42.991 14:46:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63292 00:06:42.991 14:46:21 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:42.991 14:46:21 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.991 14:46:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63292 00:06:42.991 killing process with pid 63292 00:06:42.991 14:46:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.991 14:46:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.991 14:46:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63292' 00:06:42.991 14:46:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63292 00:06:42.991 14:46:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63292 00:06:43.249 14:46:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63323 ]] 00:06:43.249 14:46:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63323 00:06:43.249 14:46:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63323 ']' 00:06:43.249 14:46:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63323 00:06:43.249 14:46:21 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:43.249 14:46:21 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.249 14:46:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63323 00:06:43.249 killing process with pid 63323 00:06:43.249 14:46:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:43.249 14:46:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:43.249 14:46:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63323' 00:06:43.249 14:46:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63323 00:06:43.249 14:46:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63323 00:06:43.507 14:46:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.507 14:46:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:43.507 14:46:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63292 ]] 00:06:43.507 14:46:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63292 00:06:43.507 14:46:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63292 ']' 00:06:43.507 14:46:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63292 00:06:43.507 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63292) - No such process 00:06:43.507 14:46:22 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63292 is not found' 00:06:43.507 Process with pid 63292 is not found 00:06:43.507 14:46:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63323 ]] 00:06:43.507 Process with pid 63323 is not found 00:06:43.507 14:46:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63323 00:06:43.507 14:46:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63323 ']' 00:06:43.507 14:46:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63323 00:06:43.507 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63323) - No such process 00:06:43.507 14:46:22 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63323 is not found' 00:06:43.507 14:46:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.507 ************************************ 00:06:43.507 END TEST cpu_locks 00:06:43.507 ************************************ 00:06:43.507 00:06:43.507 real 0m18.050s 00:06:43.507 user 0m34.692s 00:06:43.507 sys 0m4.450s 00:06:43.507 14:46:22 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.507 14:46:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.507 14:46:22 event -- common/autotest_common.sh@1142 -- # return 0 00:06:43.507 ************************************ 00:06:43.507 END TEST event 00:06:43.507 ************************************ 00:06:43.507 00:06:43.507 real 0m45.129s 00:06:43.507 user 1m30.540s 00:06:43.507 sys 0m7.961s 00:06:43.507 14:46:22 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.507 14:46:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.766 14:46:22 -- common/autotest_common.sh@1142 -- # return 0 00:06:43.766 14:46:22 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:43.766 14:46:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.766 14:46:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.766 14:46:22 -- common/autotest_common.sh@10 -- # set +x 00:06:43.766 ************************************ 00:06:43.766 START TEST thread 
00:06:43.766 ************************************ 00:06:43.766 14:46:22 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:43.766 * Looking for test storage... 00:06:43.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:43.766 14:46:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.766 14:46:22 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:43.766 14:46:22 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.766 14:46:22 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.766 ************************************ 00:06:43.766 START TEST thread_poller_perf 00:06:43.766 ************************************ 00:06:43.766 14:46:22 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.766 [2024-07-12 14:46:22.278362] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:43.766 [2024-07-12 14:46:22.278448] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63464 ] 00:06:43.766 [2024-07-12 14:46:22.411998] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.025 [2024-07-12 14:46:22.472969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.025 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:44.961 ====================================== 00:06:44.961 busy:2211862709 (cyc) 00:06:44.961 total_run_count: 301000 00:06:44.961 tsc_hz: 2200000000 (cyc) 00:06:44.961 ====================================== 00:06:44.961 poller_cost: 7348 (cyc), 3340 (nsec) 00:06:44.961 00:06:44.961 real 0m1.296s 00:06:44.961 user 0m1.145s 00:06:44.961 sys 0m0.042s 00:06:44.961 14:46:23 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.961 14:46:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.961 ************************************ 00:06:44.961 END TEST thread_poller_perf 00:06:44.961 ************************************ 00:06:44.961 14:46:23 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:44.961 14:46:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:44.961 14:46:23 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:44.961 14:46:23 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.961 14:46:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.961 ************************************ 00:06:44.961 START TEST thread_poller_perf 00:06:44.961 ************************************ 00:06:44.961 14:46:23 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.220 [2024-07-12 14:46:23.620064] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
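Editor's note (annotation, not emitted by poller_perf): the poller_cost figure above is simply busy cycles divided by total_run_count, converted to nanoseconds via the reported tsc_hz; the same formula explains the second run below. A two-line check of the numbers in this run:
echo $(( 2211862709 / 301000 ))                                  # ~= 7348 cyc per poll
awk 'BEGIN { printf "%.0f ns\n", 7348 / 2200000000 * 1e9 }'     # ~= 3340 ns at the reported 2.2 GHz TSC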
00:06:45.220 [2024-07-12 14:46:23.620152] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63500 ] 00:06:45.220 [2024-07-12 14:46:23.759257] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.220 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:45.220 [2024-07-12 14:46:23.833210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.593 ====================================== 00:06:46.593 busy:2202550162 (cyc) 00:06:46.593 total_run_count: 3744000 00:06:46.593 tsc_hz: 2200000000 (cyc) 00:06:46.593 ====================================== 00:06:46.593 poller_cost: 588 (cyc), 267 (nsec) 00:06:46.593 ************************************ 00:06:46.593 END TEST thread_poller_perf 00:06:46.593 ************************************ 00:06:46.593 00:06:46.593 real 0m1.305s 00:06:46.593 user 0m1.151s 00:06:46.593 sys 0m0.045s 00:06:46.593 14:46:24 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.593 14:46:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.593 14:46:24 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:46.593 14:46:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:46.593 ************************************ 00:06:46.593 END TEST thread 00:06:46.593 ************************************ 00:06:46.593 00:06:46.593 real 0m2.764s 00:06:46.593 user 0m2.351s 00:06:46.593 sys 0m0.189s 00:06:46.593 14:46:24 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.593 14:46:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.593 14:46:24 -- common/autotest_common.sh@1142 -- # return 0 00:06:46.593 14:46:24 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:46.593 14:46:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.593 14:46:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.593 14:46:24 -- common/autotest_common.sh@10 -- # set +x 00:06:46.593 ************************************ 00:06:46.593 START TEST accel 00:06:46.593 ************************************ 00:06:46.593 14:46:24 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:46.593 * Looking for test storage... 00:06:46.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:46.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.593 14:46:25 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:46.593 14:46:25 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:46.593 14:46:25 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:46.593 14:46:25 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63574 00:06:46.593 14:46:25 accel -- accel/accel.sh@63 -- # waitforlisten 63574 00:06:46.593 14:46:25 accel -- common/autotest_common.sh@829 -- # '[' -z 63574 ']' 00:06:46.593 14:46:25 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.593 14:46:25 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.593 14:46:25 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:46.593 14:46:25 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.593 14:46:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.593 14:46:25 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:46.593 14:46:25 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:46.593 14:46:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.593 14:46:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.593 14:46:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.593 14:46:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.593 14:46:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.593 14:46:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:46.593 14:46:25 accel -- accel/accel.sh@41 -- # jq -r . 00:06:46.593 [2024-07-12 14:46:25.160286] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:46.593 [2024-07-12 14:46:25.160413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63574 ] 00:06:46.851 [2024-07-12 14:46:25.302827] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.851 [2024-07-12 14:46:25.363461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.109 14:46:25 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.109 14:46:25 accel -- common/autotest_common.sh@862 -- # return 0 00:06:47.109 14:46:25 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:47.109 14:46:25 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:47.109 14:46:25 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:47.109 14:46:25 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:47.109 14:46:25 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:47.109 14:46:25 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:47.109 14:46:25 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.109 14:46:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.109 14:46:25 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:47.109 14:46:25 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.109 14:46:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.109 14:46:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.109 14:46:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.109 14:46:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.109 14:46:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.109 14:46:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.110 14:46:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.110 14:46:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.110 14:46:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.110 14:46:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.110 14:46:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.110 14:46:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.110 14:46:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.110 14:46:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.110 14:46:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.110 14:46:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.110 14:46:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.110 14:46:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.110 14:46:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.110 14:46:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.110 14:46:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.110 14:46:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.110 14:46:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.110 14:46:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.110 14:46:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.110 14:46:25 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.110 14:46:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.110 14:46:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.110 14:46:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.110 14:46:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.110 14:46:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.110 14:46:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.110 14:46:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.110 14:46:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.110 14:46:25 accel -- accel/accel.sh@75 -- # killprocess 63574 00:06:47.110 14:46:25 accel -- common/autotest_common.sh@948 -- # '[' -z 63574 ']' 00:06:47.110 14:46:25 accel -- common/autotest_common.sh@952 -- # kill -0 63574 00:06:47.110 14:46:25 accel -- common/autotest_common.sh@953 -- # uname 00:06:47.110 14:46:25 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.110 14:46:25 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63574 00:06:47.110 14:46:25 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.110 14:46:25 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.110 14:46:25 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63574' 00:06:47.110 killing process with pid 63574 00:06:47.110 14:46:25 accel -- common/autotest_common.sh@967 -- # kill 63574 00:06:47.110 14:46:25 accel -- common/autotest_common.sh@972 -- # wait 63574 00:06:47.368 14:46:25 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:47.368 14:46:25 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:47.368 14:46:25 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:47.368 14:46:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.368 14:46:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.368 14:46:25 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:47.368 14:46:25 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:47.368 14:46:25 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:47.368 14:46:25 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.368 14:46:25 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.368 14:46:25 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.368 14:46:25 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.368 14:46:25 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.368 14:46:25 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:47.368 14:46:25 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:47.368 14:46:25 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.368 14:46:25 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:47.368 14:46:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.368 14:46:25 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:47.368 14:46:25 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:47.368 14:46:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.368 14:46:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.368 ************************************ 00:06:47.368 START TEST accel_missing_filename 00:06:47.368 ************************************ 00:06:47.368 14:46:25 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:47.368 14:46:25 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:47.368 14:46:25 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:47.368 14:46:25 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:47.368 14:46:25 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.368 14:46:25 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:47.368 14:46:25 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.368 14:46:25 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:47.368 14:46:25 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:47.368 14:46:25 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:47.368 14:46:25 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.368 14:46:25 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.368 14:46:25 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.368 14:46:25 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.368 14:46:25 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.368 14:46:25 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:47.368 14:46:25 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:47.368 [2024-07-12 14:46:25.974548] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:47.368 [2024-07-12 14:46:25.974657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63630 ] 00:06:47.626 [2024-07-12 14:46:26.111628] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.626 [2024-07-12 14:46:26.189376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.626 [2024-07-12 14:46:26.222873] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.626 [2024-07-12 14:46:26.265648] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:47.884 A filename is required. 
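Editor's note (annotation, not part of the captured output): "A filename is required." is the expected outcome here; accel_missing_filename runs the compress workload without the -l input file on purpose, and the surrounding NOT wrapper turns the non-zero exit into a pass. A standalone repro would look roughly like the line below (binary path as used earlier in this log; omitting the -c JSON config the harness passes via /dev/fd/62 is an assumption):
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress; echo "exit=$?"   # non-zero, same error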
00:06:47.884 ************************************ 00:06:47.884 END TEST accel_missing_filename 00:06:47.884 ************************************ 00:06:47.884 14:46:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:47.884 14:46:26 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.884 14:46:26 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:47.884 14:46:26 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:47.884 14:46:26 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:47.884 14:46:26 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.884 00:06:47.884 real 0m0.399s 00:06:47.884 user 0m0.262s 00:06:47.884 sys 0m0.080s 00:06:47.884 14:46:26 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.884 14:46:26 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:47.884 14:46:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.884 14:46:26 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:47.884 14:46:26 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:47.884 14:46:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.884 14:46:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.884 ************************************ 00:06:47.884 START TEST accel_compress_verify 00:06:47.884 ************************************ 00:06:47.884 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:47.884 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:47.884 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:47.884 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:47.884 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.884 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:47.884 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.884 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:47.884 14:46:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:47.884 14:46:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:47.884 14:46:26 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.884 14:46:26 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.884 14:46:26 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.884 14:46:26 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.884 14:46:26 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.884 14:46:26 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:47.884 14:46:26 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:47.884 [2024-07-12 14:46:26.415130] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:47.884 [2024-07-12 14:46:26.415232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63655 ] 00:06:48.142 [2024-07-12 14:46:26.553584] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.142 [2024-07-12 14:46:26.627153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.142 [2024-07-12 14:46:26.661984] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.142 [2024-07-12 14:46:26.707222] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:48.142 00:06:48.142 Compression does not support the verify option, aborting. 00:06:48.142 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:48.142 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.142 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:48.142 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:48.142 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:48.142 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.142 00:06:48.142 real 0m0.401s 00:06:48.142 user 0m0.263s 00:06:48.142 sys 0m0.080s 00:06:48.142 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.142 ************************************ 00:06:48.142 END TEST accel_compress_verify 00:06:48.142 ************************************ 00:06:48.142 14:46:26 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:48.475 14:46:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.475 14:46:26 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:48.475 14:46:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:48.475 14:46:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.475 14:46:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.475 ************************************ 00:06:48.475 START TEST accel_wrong_workload 00:06:48.475 ************************************ 00:06:48.475 14:46:26 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:48.475 14:46:26 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:48.475 14:46:26 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:48.475 14:46:26 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:48.475 14:46:26 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.475 14:46:26 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:48.475 14:46:26 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.475 14:46:26 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:48.475 14:46:26 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:48.475 14:46:26 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:48.475 14:46:26 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.475 14:46:26 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.475 14:46:26 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.475 14:46:26 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.475 14:46:26 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.475 14:46:26 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:48.475 14:46:26 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:48.475 Unsupported workload type: foobar 00:06:48.475 [2024-07-12 14:46:26.854342] app.c:1459:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:48.475 accel_perf options: 00:06:48.475 [-h help message] 00:06:48.475 [-q queue depth per core] 00:06:48.475 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:48.475 [-T number of threads per core 00:06:48.475 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:48.475 [-t time in seconds] 00:06:48.475 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:48.475 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:48.475 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:48.475 [-l for compress/decompress workloads, name of uncompressed input file 00:06:48.475 [-S for crc32c workload, use this seed value (default 0) 00:06:48.475 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:48.475 [-f for fill workload, use this BYTE value (default 255) 00:06:48.475 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:48.475 [-y verify result if this switch is on] 00:06:48.475 [-a tasks to allocate per core (default: same value as -q)] 00:06:48.475 Can be used to spread operations across a wider range of memory. 
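Editor's note (annotation): accel_wrong_workload, like the other negative accel cases around it, passes only because accel_perf refuses '-w foobar' and exits non-zero; the harness's NOT helper inverts that exit status. The idea, reduced to a sketch (this NOT is a stand-in for illustration, not the exact autotest_common.sh implementation):
NOT() { ! "$@"; }                                   # pass when the wrapped command fails
NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar \
  && echo "accel_wrong_workload: unsupported workload correctly rejected"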
00:06:48.475 ************************************ 00:06:48.475 END TEST accel_wrong_workload 00:06:48.475 ************************************ 00:06:48.475 14:46:26 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:48.475 14:46:26 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.475 14:46:26 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.475 14:46:26 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.475 00:06:48.475 real 0m0.032s 00:06:48.475 user 0m0.023s 00:06:48.475 sys 0m0.009s 00:06:48.475 14:46:26 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.475 14:46:26 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:48.475 14:46:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.475 14:46:26 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:48.475 14:46:26 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:48.475 14:46:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.475 14:46:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.475 ************************************ 00:06:48.475 START TEST accel_negative_buffers 00:06:48.475 ************************************ 00:06:48.476 14:46:26 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:48.476 14:46:26 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:48.476 14:46:26 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:48.476 14:46:26 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:48.476 14:46:26 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.476 14:46:26 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:48.476 14:46:26 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.476 14:46:26 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:48.476 14:46:26 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:48.476 14:46:26 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:48.476 14:46:26 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.476 14:46:26 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.476 14:46:26 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.476 14:46:26 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.476 14:46:26 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.476 14:46:26 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:48.476 14:46:26 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:48.476 -x option must be non-negative. 
00:06:48.476 [2024-07-12 14:46:26.936085] app.c:1459:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:48.476 accel_perf options: 00:06:48.476 [-h help message] 00:06:48.476 [-q queue depth per core] 00:06:48.476 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:48.476 [-T number of threads per core 00:06:48.476 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:48.476 [-t time in seconds] 00:06:48.476 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:48.476 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:48.476 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:48.476 [-l for compress/decompress workloads, name of uncompressed input file 00:06:48.476 [-S for crc32c workload, use this seed value (default 0) 00:06:48.476 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:48.476 [-f for fill workload, use this BYTE value (default 255) 00:06:48.476 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:48.476 [-y verify result if this switch is on] 00:06:48.476 [-a tasks to allocate per core (default: same value as -q)] 00:06:48.476 Can be used to spread operations across a wider range of memory. 00:06:48.476 14:46:26 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:48.476 14:46:26 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.476 ************************************ 00:06:48.476 END TEST accel_negative_buffers 00:06:48.476 ************************************ 00:06:48.476 14:46:26 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.476 14:46:26 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.476 00:06:48.476 real 0m0.041s 00:06:48.476 user 0m0.026s 00:06:48.476 sys 0m0.014s 00:06:48.476 14:46:26 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.476 14:46:26 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:48.476 14:46:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.476 14:46:26 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:48.476 14:46:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:48.476 14:46:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.476 14:46:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.476 ************************************ 00:06:48.476 START TEST accel_crc32c 00:06:48.476 ************************************ 00:06:48.476 14:46:26 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:48.476 14:46:26 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:48.476 14:46:26 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:48.476 14:46:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.476 14:46:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.476 14:46:26 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:48.476 14:46:26 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:48.476 14:46:26 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:48.476 14:46:26 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.476 14:46:26 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.476 14:46:26 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.476 14:46:26 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.476 14:46:26 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.476 14:46:26 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:48.476 14:46:26 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:48.476 [2024-07-12 14:46:27.001475] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:48.476 [2024-07-12 14:46:27.001604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63713 ] 00:06:48.754 [2024-07-12 14:46:27.136196] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.754 [2024-07-12 14:46:27.195127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.754 14:46:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:49.689 14:46:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.689 00:06:49.689 real 0m1.358s 00:06:49.689 user 0m1.193s 00:06:49.689 sys 0m0.068s 00:06:49.689 14:46:28 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.689 14:46:28 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:49.689 ************************************ 00:06:49.689 END TEST accel_crc32c 00:06:49.689 ************************************ 00:06:49.947 14:46:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.948 14:46:28 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:49.948 14:46:28 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:49.948 14:46:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.948 14:46:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.948 ************************************ 00:06:49.948 START TEST accel_crc32c_C2 00:06:49.948 ************************************ 00:06:49.948 14:46:28 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:49.948 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.948 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:49.948 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.948 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:49.948 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.948 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:49.948 14:46:28 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.948 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.948 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.948 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.948 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.948 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.948 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:49.948 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:49.948 [2024-07-12 14:46:28.409818] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:49.948 [2024-07-12 14:46:28.409930] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63742 ] 00:06:49.948 [2024-07-12 14:46:28.548716] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.205 [2024-07-12 14:46:28.623247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.205 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.206 14:46:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.140 14:46:29 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.140 00:06:51.140 real 0m1.397s 00:06:51.140 user 0m1.217s 00:06:51.140 sys 0m0.082s 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.140 14:46:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:51.140 ************************************ 00:06:51.140 END TEST accel_crc32c_C2 00:06:51.140 ************************************ 00:06:51.399 14:46:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.399 14:46:29 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:51.399 14:46:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:51.399 14:46:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.399 14:46:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.399 ************************************ 00:06:51.399 START TEST accel_copy 00:06:51.399 ************************************ 00:06:51.399 14:46:29 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:51.399 14:46:29 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:51.399 14:46:29 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:51.399 14:46:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.399 14:46:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.399 14:46:29 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:51.399 14:46:29 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:51.399 14:46:29 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:51.399 14:46:29 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.399 14:46:29 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.399 14:46:29 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.399 14:46:29 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.399 14:46:29 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.399 14:46:29 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:51.399 14:46:29 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:51.399 [2024-07-12 14:46:29.856838] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:51.399 [2024-07-12 14:46:29.857650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63777 ] 00:06:51.399 [2024-07-12 14:46:29.995428] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.659 [2024-07-12 14:46:30.063981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 
14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.659 14:46:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.593 14:46:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.594 14:46:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:52.594 14:46:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.594 14:46:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.594 14:46:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.594 14:46:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:52.594 14:46:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.594 14:46:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.594 ************************************ 00:06:52.594 END TEST accel_copy 00:06:52.594 ************************************ 00:06:52.594 14:46:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.594 14:46:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.594 14:46:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:52.594 14:46:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.594 00:06:52.594 real 0m1.383s 00:06:52.594 user 0m1.201s 00:06:52.594 sys 0m0.083s 00:06:52.594 14:46:31 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.594 14:46:31 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:52.851 14:46:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.851 14:46:31 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:52.851 14:46:31 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:52.852 14:46:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.852 14:46:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.852 ************************************ 00:06:52.852 START TEST accel_fill 00:06:52.852 ************************************ 00:06:52.852 14:46:31 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:52.852 14:46:31 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:52.852 14:46:31 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:52.852 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.852 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.852 14:46:31 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:52.852 14:46:31 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:52.852 14:46:31 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:52.852 14:46:31 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.852 14:46:31 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.852 14:46:31 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.852 14:46:31 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.852 14:46:31 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.852 14:46:31 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:52.852 14:46:31 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:52.852 [2024-07-12 14:46:31.286824] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:52.852 [2024-07-12 14:46:31.286921] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63811 ] 00:06:52.852 [2024-07-12 14:46:31.426406] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.852 [2024-07-12 14:46:31.486535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.111 14:46:31 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.111 14:46:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.046 14:46:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.046 14:46:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.046 14:46:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.046 14:46:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.046 14:46:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.046 14:46:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.046 14:46:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.046 14:46:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.046 14:46:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.046 14:46:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.046 14:46:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
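The "-- # val=..." entries in the trace above are the accel_test wrapper's read/case loop recording each accel_perf option for this run (workload fill, fill byte 0x80 matching the -f 128 passed by run_test, 4096-byte transfers, the software module, queue depth and task count of 64, a 1-second run). A minimal standalone sketch of the same fill workload, using only flags listed in the accel_perf usage text near the top of this section; dropping the harness's generated JSON config (the -c /dev/fd/62 descriptor) is an assumption here, on the expectation that the example falls back to its defaults without -c:

    # Hypothetical direct run of the fill workload, mirroring the traced accel_test flags.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w fill -f 128 -q 64 -a 64 -y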
00:06:54.046 14:46:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.046 14:46:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.046 14:46:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.047 14:46:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.047 14:46:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.047 14:46:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.047 14:46:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.047 14:46:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.047 14:46:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.047 14:46:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.047 14:46:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.047 14:46:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.047 14:46:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.047 14:46:32 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.047 14:46:32 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:54.047 14:46:32 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.047 00:06:54.047 real 0m1.370s 00:06:54.047 user 0m1.204s 00:06:54.047 sys 0m0.071s 00:06:54.047 14:46:32 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.047 ************************************ 00:06:54.047 END TEST accel_fill 00:06:54.047 ************************************ 00:06:54.047 14:46:32 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:54.047 14:46:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.047 14:46:32 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:54.047 14:46:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:54.047 14:46:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.047 14:46:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.047 ************************************ 00:06:54.047 START TEST accel_copy_crc32c 00:06:54.047 ************************************ 00:06:54.047 14:46:32 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:54.047 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:54.047 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:54.047 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.047 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.047 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:54.047 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:54.047 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:54.047 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.047 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.047 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.047 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.047 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.047 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:06:54.047 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:54.306 [2024-07-12 14:46:32.701424] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:54.306 [2024-07-12 14:46:32.701539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63846 ] 00:06:54.306 [2024-07-12 14:46:32.842779] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.306 [2024-07-12 14:46:32.913259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.306 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.564 14:46:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
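After each timed run the wrapper asserts on what accel_perf reported: the accel/accel.sh@27 checks that follow each test ([[ -n software ]], [[ -n copy_crc32c ]], [[ software == \s\o\f\t\w\a\r\e ]] — the backslashes are only bash xtrace marking the quoted pattern as literal) confirm that the software accel module executed the expected opcode. A rough restatement of that check in plain bash, with the variable names invented for illustration:

    # Hypothetical sketch of the post-run assertion pattern seen in the trace.
    accel_module=software     # module reported by the completed run
    accel_opc=copy_crc32c     # opcode reported by the completed run
    [[ -n "$accel_module" && -n "$accel_opc" && "$accel_module" == software ]] \
        && echo "copy_crc32c was handled by the software module"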
00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.512 ************************************ 00:06:55.512 END TEST accel_copy_crc32c 00:06:55.512 ************************************ 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.512 00:06:55.512 real 0m1.389s 00:06:55.512 user 0m1.205s 00:06:55.512 sys 0m0.091s 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.512 14:46:34 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:55.512 14:46:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.512 14:46:34 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:55.512 14:46:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:55.512 14:46:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.512 14:46:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.512 ************************************ 00:06:55.512 START TEST accel_copy_crc32c_C2 00:06:55.512 ************************************ 00:06:55.512 14:46:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:55.512 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.512 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:55.512 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.512 14:46:34 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:55.513 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.513 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:55.513 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.513 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.513 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.513 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.513 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.513 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.513 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:55.513 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:55.513 [2024-07-12 14:46:34.133908] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:06:55.513 [2024-07-12 14:46:34.134026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63875 ] 00:06:55.795 [2024-07-12 14:46:34.273663] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.795 [2024-07-12 14:46:34.334694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.795 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.795 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.795 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.795 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.796 14:46:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.169 00:06:57.169 real 0m1.380s 00:06:57.169 user 0m1.210s 00:06:57.169 sys 0m0.077s 00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
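The accel_copy_crc32c_C2 case finishing here differs from the plain copy_crc32c run only in the -C 2 flag, which per the usage text sets the io vector size to test (default 1), i.e. each submitted operation works on a two-element vector. A hedged sketch of the equivalent direct invocation, with the same caveat as above about skipping the harness's JSON config:

    # -C 2 exercises copy_crc32c with a two-element io vector (see the usage text earlier).
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2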
00:06:57.169 14:46:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:57.169 ************************************ 00:06:57.169 END TEST accel_copy_crc32c_C2 00:06:57.169 ************************************ 00:06:57.169 14:46:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.169 14:46:35 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:57.169 14:46:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:57.169 14:46:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.169 14:46:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.169 ************************************ 00:06:57.169 START TEST accel_dualcast 00:06:57.169 ************************************ 00:06:57.169 14:46:35 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:57.169 [2024-07-12 14:46:35.559489] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:06:57.169 [2024-07-12 14:46:35.559602] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63909 ] 00:06:57.169 [2024-07-12 14:46:35.693046] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.169 [2024-07-12 14:46:35.762683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.169 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.170 14:46:35 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.170 14:46:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:58.545 14:46:36 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.545 00:06:58.545 real 0m1.382s 00:06:58.545 user 0m1.204s 00:06:58.545 sys 0m0.081s 00:06:58.545 ************************************ 00:06:58.545 END TEST accel_dualcast 00:06:58.545 ************************************ 00:06:58.545 14:46:36 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.545 14:46:36 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:58.546 14:46:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.546 14:46:36 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:58.546 14:46:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:58.546 14:46:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.546 14:46:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.546 ************************************ 00:06:58.546 START TEST accel_compare 00:06:58.546 ************************************ 00:06:58.546 14:46:36 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:58.546 14:46:36 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:58.546 14:46:36 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:58.546 14:46:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.546 14:46:36 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:58.546 14:46:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.546 14:46:36 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:58.546 14:46:36 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:58.546 14:46:36 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.546 14:46:36 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.546 14:46:36 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.546 14:46:36 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.546 14:46:36 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.546 14:46:36 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:58.546 14:46:36 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:58.546 [2024-07-12 14:46:36.987458] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:06:58.546 [2024-07-12 14:46:36.987571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63944 ] 00:06:58.546 [2024-07-12 14:46:37.123403] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.546 [2024-07-12 14:46:37.196687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 14:46:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:59.739 14:46:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.739 00:06:59.739 real 0m1.389s 00:06:59.739 user 0m1.200s 00:06:59.739 sys 0m0.087s 00:06:59.739 ************************************ 00:06:59.739 END TEST accel_compare 00:06:59.739 ************************************ 00:06:59.739 14:46:38 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.739 14:46:38 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:59.739 14:46:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.739 14:46:38 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:59.739 14:46:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:59.739 14:46:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.739 14:46:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.998 ************************************ 00:06:59.998 START TEST accel_xor 00:06:59.998 ************************************ 00:06:59.998 14:46:38 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:59.998 14:46:38 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:59.998 14:46:38 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:59.998 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.998 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.998 14:46:38 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:59.998 14:46:38 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:59.998 14:46:38 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:59.998 14:46:38 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.998 14:46:38 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.998 14:46:38 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.998 14:46:38 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.998 14:46:38 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.998 14:46:38 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:59.998 14:46:38 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:59.998 [2024-07-12 14:46:38.421770] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:06:59.998 [2024-07-12 14:46:38.421897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63975 ] 00:06:59.998 [2024-07-12 14:46:38.565420] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.998 [2024-07-12 14:46:38.635612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.257 14:46:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.192 14:46:39 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.192 00:07:01.192 real 0m1.392s 00:07:01.192 user 0m1.215s 00:07:01.192 sys 0m0.081s 00:07:01.192 14:46:39 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.192 14:46:39 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:01.192 ************************************ 00:07:01.192 END TEST accel_xor 00:07:01.192 ************************************ 00:07:01.192 14:46:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.192 14:46:39 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:01.192 14:46:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:01.192 14:46:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.192 14:46:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.192 ************************************ 00:07:01.192 START TEST accel_xor 00:07:01.192 ************************************ 00:07:01.192 14:46:39 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:01.192 14:46:39 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:01.450 [2024-07-12 14:46:39.857122] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:07:01.450 [2024-07-12 14:46:39.857202] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64015 ] 00:07:01.450 [2024-07-12 14:46:39.991523] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.450 [2024-07-12 14:46:40.050657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.450 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.451 14:46:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.826 14:46:41 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:02.826 14:46:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.826 00:07:02.826 real 0m1.363s 00:07:02.826 user 0m1.200s 00:07:02.826 sys 0m0.069s 00:07:02.826 14:46:41 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.826 14:46:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:02.826 ************************************ 00:07:02.826 END TEST accel_xor 00:07:02.826 ************************************ 00:07:02.826 14:46:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.826 14:46:41 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:02.826 14:46:41 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:02.826 14:46:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.826 14:46:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.826 ************************************ 00:07:02.826 START TEST accel_dif_verify 00:07:02.826 ************************************ 00:07:02.826 14:46:41 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:02.826 14:46:41 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:02.826 14:46:41 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:02.826 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.826 14:46:41 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:02.826 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.826 14:46:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:02.826 14:46:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:02.826 14:46:41 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.826 14:46:41 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.826 14:46:41 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.826 14:46:41 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.826 14:46:41 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.826 14:46:41 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:02.826 14:46:41 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:02.826 [2024-07-12 14:46:41.276956] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:07:02.826 [2024-07-12 14:46:41.277086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64044 ] 00:07:02.826 [2024-07-12 14:46:41.418730] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.084 [2024-07-12 14:46:41.488528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.084 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.084 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.084 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.084 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.084 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.085 14:46:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.019 14:46:42 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.019 ************************************ 00:07:04.019 END TEST accel_dif_verify 00:07:04.019 ************************************ 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:04.019 14:46:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.019 00:07:04.019 real 0m1.388s 00:07:04.019 user 0m0.012s 00:07:04.019 sys 0m0.003s 00:07:04.019 14:46:42 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.019 14:46:42 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:04.277 14:46:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.277 14:46:42 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:04.277 14:46:42 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:04.277 14:46:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.277 14:46:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.277 ************************************ 00:07:04.277 START TEST accel_dif_generate 00:07:04.277 ************************************ 00:07:04.277 14:46:42 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:04.277 14:46:42 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:04.277 14:46:42 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:04.277 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.277 14:46:42 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.277 14:46:42 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:04.277 14:46:42 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:04.277 14:46:42 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:04.277 14:46:42 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.277 14:46:42 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.277 14:46:42 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.277 14:46:42 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.277 14:46:42 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.277 14:46:42 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:04.277 14:46:42 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:04.277 [2024-07-12 14:46:42.705799] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:07:04.277 [2024-07-12 14:46:42.705900] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64084 ] 00:07:04.277 [2024-07-12 14:46:42.847473] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.277 [2024-07-12 14:46:42.916940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.535 14:46:42 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.535 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.536 14:46:42 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.536 14:46:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:05.470 14:46:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.470 00:07:05.470 real 0m1.385s 
00:07:05.470 user 0m1.218s 00:07:05.470 sys 0m0.074s 00:07:05.470 14:46:44 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.470 ************************************ 00:07:05.470 END TEST accel_dif_generate 00:07:05.470 ************************************ 00:07:05.470 14:46:44 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:05.470 14:46:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.470 14:46:44 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:05.470 14:46:44 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:05.470 14:46:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.470 14:46:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.470 ************************************ 00:07:05.470 START TEST accel_dif_generate_copy 00:07:05.470 ************************************ 00:07:05.470 14:46:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:05.470 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:05.470 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:05.470 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.470 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:05.470 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.470 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:05.470 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:05.470 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.470 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.470 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.470 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.470 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.470 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:05.470 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:05.728 [2024-07-12 14:46:44.136765] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:07:05.728 [2024-07-12 14:46:44.136859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64113 ] 00:07:05.728 [2024-07-12 14:46:44.277403] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.728 [2024-07-12 14:46:44.347171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.986 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.987 14:46:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.924 00:07:06.924 real 0m1.389s 00:07:06.924 user 0m0.015s 00:07:06.924 sys 0m0.003s 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.924 ************************************ 00:07:06.924 14:46:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:06.924 END TEST accel_dif_generate_copy 00:07:06.924 ************************************ 00:07:06.924 14:46:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.924 14:46:45 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:06.924 14:46:45 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.924 14:46:45 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:06.924 14:46:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.924 14:46:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.924 ************************************ 00:07:06.924 START TEST accel_comp 00:07:06.924 ************************************ 00:07:06.924 14:46:45 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.924 14:46:45 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:06.924 14:46:45 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:06.924 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.924 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.924 14:46:45 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.924 14:46:45 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.924 14:46:45 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:06.924 14:46:45 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.924 14:46:45 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.924 14:46:45 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.924 14:46:45 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.924 14:46:45 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.924 14:46:45 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:06.924 14:46:45 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:06.924 [2024-07-12 14:46:45.571265] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:07:06.924 [2024-07-12 14:46:45.571345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64144 ] 00:07:07.183 [2024-07-12 14:46:45.703178] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.183 [2024-07-12 14:46:45.772285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.183 14:46:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:08.622 14:46:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.622 00:07:08.622 real 0m1.376s 00:07:08.622 user 0m1.202s 00:07:08.622 sys 0m0.079s 00:07:08.622 ************************************ 00:07:08.622 END TEST accel_comp 00:07:08.622 ************************************ 00:07:08.622 14:46:46 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.622 14:46:46 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:08.622 14:46:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.622 14:46:46 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.622 14:46:46 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:08.622 14:46:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.622 14:46:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.622 ************************************ 00:07:08.622 START TEST accel_decomp 00:07:08.622 ************************************ 00:07:08.622 14:46:46 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.622 14:46:46 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:08.622 14:46:46 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:08.622 14:46:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:46 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.622 14:46:46 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.622 14:46:46 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:08.622 14:46:46 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.622 14:46:46 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.622 14:46:46 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.622 14:46:46 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.622 14:46:46 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.622 14:46:46 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:08.622 14:46:46 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:08.622 [2024-07-12 14:46:46.990762] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:07:08.622 [2024-07-12 14:46:46.990871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64184 ] 00:07:08.622 [2024-07-12 14:46:47.128479] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.622 [2024-07-12 14:46:47.205221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.622 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.623 14:46:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.623 14:46:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.623 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.623 14:46:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.991 14:46:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.991 ************************************ 00:07:09.991 END TEST accel_decomp 00:07:09.992 ************************************ 00:07:09.992 14:46:48 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.992 14:46:48 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:09.992 14:46:48 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.992 00:07:09.992 real 0m1.398s 00:07:09.992 user 0m1.222s 00:07:09.992 sys 0m0.079s 00:07:09.992 14:46:48 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.992 14:46:48 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:09.992 14:46:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.992 14:46:48 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:09.992 14:46:48 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:09.992 14:46:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.992 14:46:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.992 ************************************ 00:07:09.992 START TEST accel_decomp_full 00:07:09.992 ************************************ 00:07:09.992 14:46:48 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:09.992 14:46:48 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:09.992 14:46:48 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:09.992 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.992 14:46:48 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:09.992 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.992 14:46:48 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:09.992 14:46:48 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:09.992 14:46:48 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.992 14:46:48 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.992 14:46:48 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.992 14:46:48 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.992 14:46:48 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.992 14:46:48 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:09.992 14:46:48 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:09.992 [2024-07-12 14:46:48.427261] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:07:09.992 [2024-07-12 14:46:48.427396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64213 ] 00:07:09.992 [2024-07-12 14:46:48.567001] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.992 [2024-07-12 14:46:48.638700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.250 14:46:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.179 14:46:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.179 14:46:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.179 14:46:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.180 14:46:49 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:11.180 14:46:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.180 00:07:11.180 real 0m1.417s 00:07:11.180 user 0m1.237s 00:07:11.180 sys 0m0.083s 00:07:11.180 14:46:49 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.180 ************************************ 00:07:11.180 END TEST accel_decomp_full 00:07:11.180 ************************************ 00:07:11.180 14:46:49 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:11.437 14:46:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.437 14:46:49 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:11.437 14:46:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:11.437 14:46:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.437 14:46:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.437 ************************************ 00:07:11.437 START TEST accel_decomp_mcore 00:07:11.437 ************************************ 00:07:11.437 14:46:49 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:11.437 14:46:49 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:11.437 14:46:49 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:11.437 14:46:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.437 14:46:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.437 14:46:49 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:11.437 14:46:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:11.437 14:46:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:11.437 14:46:49 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.437 14:46:49 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.437 14:46:49 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.437 14:46:49 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.437 14:46:49 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.437 14:46:49 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:11.438 14:46:49 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:11.438 [2024-07-12 14:46:49.885426] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:07:11.438 [2024-07-12 14:46:49.885549] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64248 ] 00:07:11.438 [2024-07-12 14:46:50.023084] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.438 [2024-07-12 14:46:50.086628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.438 [2024-07-12 14:46:50.086730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.438 [2024-07-12 14:46:50.086793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.438 [2024-07-12 14:46:50.086958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.695 14:46:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.629 14:46:51 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.629 00:07:12.629 real 0m1.391s 00:07:12.629 user 0m4.413s 00:07:12.629 sys 0m0.090s 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.629 14:46:51 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:12.629 ************************************ 00:07:12.629 END TEST accel_decomp_mcore 00:07:12.629 ************************************ 00:07:12.887 14:46:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.887 14:46:51 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.887 14:46:51 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:12.887 14:46:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.887 14:46:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.887 ************************************ 00:07:12.887 START TEST accel_decomp_full_mcore 00:07:12.888 ************************************ 00:07:12.888 14:46:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.888 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:12.888 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:12.888 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.888 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.888 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.888 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.888 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:12.888 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.888 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.888 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.888 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.888 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.888 14:46:51 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:12.888 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:12.888 [2024-07-12 14:46:51.327957] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:07:12.888 [2024-07-12 14:46:51.328077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64285 ] 00:07:12.888 [2024-07-12 14:46:51.464274] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.888 [2024-07-12 14:46:51.525200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.888 [2024-07-12 14:46:51.525260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.888 [2024-07-12 14:46:51.525338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.888 [2024-07-12 14:46:51.525345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:13.146 14:46:51 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.146 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.147 14:46:51 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.147 14:46:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.080 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.081 14:46:52 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.081 00:07:14.081 real 0m1.412s 00:07:14.081 user 0m0.012s 00:07:14.081 sys 0m0.004s 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.081 14:46:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:14.081 ************************************ 00:07:14.081 END TEST accel_decomp_full_mcore 00:07:14.081 ************************************ 00:07:14.338 14:46:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.338 14:46:52 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:14.338 14:46:52 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:14.338 14:46:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.338 14:46:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.338 ************************************ 00:07:14.338 START TEST accel_decomp_mthread 00:07:14.338 ************************************ 00:07:14.338 14:46:52 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:14.338 14:46:52 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:14.338 14:46:52 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:14.338 14:46:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.338 14:46:52 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:14.338 14:46:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.338 14:46:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:14.338 14:46:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:14.338 14:46:52 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.338 14:46:52 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.338 14:46:52 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.338 14:46:52 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.338 14:46:52 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.338 14:46:52 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:14.338 14:46:52 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:14.338 [2024-07-12 14:46:52.782416] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
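(Note on the accel_decomp_mcore and accel_decomp_full_mcore cases that finished above: a minimal hand-run sketch with the same accel_perf flags the log captured. The harness also pipes a generated JSON accel config in on /dev/fd/62; in this run build_accel_config produced no module overrides, so -c is left out here and the software module is used either way. Option semantics beyond what the log itself shows are assumptions.)

    # hedged sketch, not part of the harness -- re-running the multicore decompress cases by hand
    cd /home/vagrant/spdk_repo/spdk
    # -m 0xf is the core mask (the log shows four reactors, cores 0-3); -t 1 runs for one second;
    # -l names the compressed input file; -y is assumed to verify the decompressed output
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf
    # the "full" variant adds -o 0; per the log it switches from '4096 bytes' to
    # '111250 bytes' per operation, i.e. the whole bib file in one shot
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf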
00:07:14.338 [2024-07-12 14:46:52.782558] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64317 ] 00:07:14.338 [2024-07-12 14:46:52.916692] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.607 [2024-07-12 14:46:52.999950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.607 14:46:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.546 00:07:15.546 real 0m1.396s 00:07:15.546 user 0m1.218s 00:07:15.546 sys 0m0.083s 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.546 14:46:54 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:15.546 ************************************ 00:07:15.546 END TEST accel_decomp_mthread 00:07:15.546 ************************************ 00:07:15.546 14:46:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.546 14:46:54 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:15.546 14:46:54 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:15.546 14:46:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.546 14:46:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.805 ************************************ 00:07:15.805 START 
TEST accel_decomp_full_mthread 00:07:15.805 ************************************ 00:07:15.805 14:46:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:15.805 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:15.805 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:15.805 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.805 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:15.805 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.805 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:15.805 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:15.805 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.805 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.805 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.805 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.805 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.805 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:15.805 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:15.805 [2024-07-12 14:46:54.226075] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:07:15.805 [2024-07-12 14:46:54.226191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64352 ] 00:07:15.805 [2024-07-12 14:46:54.396590] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.065 [2024-07-12 14:46:54.478312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.065 14:46:54 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.065 14:46:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.999 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.999 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.999 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.999 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.999 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.999 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.999 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.999 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.999 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.999 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.999 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.999 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.999 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.257 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.257 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.257 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.257 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.257 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.258 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.258 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.258 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.258 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.258 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.258 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.258 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.258 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.258 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.258 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:17.258 14:46:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.258 00:07:17.258 real 0m1.453s 00:07:17.258 user 0m1.282s 00:07:17.258 sys 0m0.078s 00:07:17.258 14:46:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.258 ************************************ 00:07:17.258 END TEST accel_decomp_full_mthread 00:07:17.258 ************************************ 00:07:17.258 14:46:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
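(Note on the accel_decomp_mthread and accel_decomp_full_mthread cases above: same decompress workload, but threaded with -T 2 on a single reactor -- the EAL parameters show -c 0x1 and one reactor on core 0. A hand-run sketch under the same assumptions as the earlier one:)

    # hedged sketch -- the two-thread decompress variants by hand
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2        # 4096-byte ops
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2   # whole-file (-o 0) ops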
00:07:17.258 14:46:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.258 14:46:55 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:17.258 14:46:55 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:17.258 14:46:55 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:17.258 14:46:55 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:17.258 14:46:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.258 14:46:55 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.258 14:46:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.258 14:46:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.258 14:46:55 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.258 14:46:55 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.258 14:46:55 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.258 14:46:55 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:17.258 14:46:55 accel -- accel/accel.sh@41 -- # jq -r . 00:07:17.258 ************************************ 00:07:17.258 START TEST accel_dif_functional_tests 00:07:17.258 ************************************ 00:07:17.258 14:46:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:17.258 [2024-07-12 14:46:55.759147] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:07:17.258 [2024-07-12 14:46:55.759243] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64387 ] 00:07:17.258 [2024-07-12 14:46:55.895132] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.516 [2024-07-12 14:46:55.960767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.516 [2024-07-12 14:46:55.960837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.516 [2024-07-12 14:46:55.960845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.516 00:07:17.516 00:07:17.516 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.516 http://cunit.sourceforge.net/ 00:07:17.516 00:07:17.516 00:07:17.516 Suite: accel_dif 00:07:17.516 Test: verify: DIF generated, GUARD check ...passed 00:07:17.516 Test: verify: DIF generated, APPTAG check ...passed 00:07:17.516 Test: verify: DIF generated, REFTAG check ...passed 00:07:17.516 Test: verify: DIF not generated, GUARD check ...passed 00:07:17.516 Test: verify: DIF not generated, APPTAG check ...passed 00:07:17.516 Test: verify: DIF not generated, REFTAG check ...passed 00:07:17.516 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:17.516 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:17.516 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:17.516 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:17.516 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-07-12 14:46:56.014798] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:17.516 [2024-07-12 14:46:56.014890] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:17.516 [2024-07-12 14:46:56.014932] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, 
Actual=5a5a5a5a 00:07:17.516 [2024-07-12 14:46:56.015012] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:17.516 passed 00:07:17.516 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:17.516 Test: verify copy: DIF generated, GUARD check ...passed 00:07:17.516 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:17.516 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:17.516 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 14:46:56.015197] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:17.516 [2024-07-12 14:46:56.015393] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:17.516 passed 00:07:17.516 Test: verify copy: DIF not generated, APPTAG check ...passed 00:07:17.516 Test: verify copy: DIF not generated, REFTAG check ...passed 00:07:17.516 Test: generate copy: DIF generated, GUARD check ...passed 00:07:17.516 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:17.516 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:17.516 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:17.516 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-07-12 14:46:56.015441] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:17.517 [2024-07-12 14:46:56.015480] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:17.517 passed 00:07:17.517 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:17.517 Test: generate copy: iovecs-len validate ...passed 00:07:17.517 Test: generate copy: buffer alignment validate ...passed 00:07:17.517 00:07:17.517 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.517 suites 1 1 n/a 0 0 00:07:17.517 tests 26 26 26 0 0 00:07:17.517 asserts 115 115 115 0 n/a 00:07:17.517 00:07:17.517 Elapsed time = 0.002 seconds 00:07:17.517 [2024-07-12 14:46:56.015799] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
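(Note: the dif.c *ERROR* lines above are the negative-path checks the accel_dif CUnit suite deliberately triggers -- mismatched Guard, App Tag, Ref Tag, and misaligned bounce iovecs -- not failures; the Run Summary confirms all 26 tests and 115 asserts passed. A hand-run sketch of the same binary follows; the empty config string is a placeholder assumption, since build_accel_config generated no module overrides in this run:)

    # hedged sketch -- running the DIF functional test binary outside run_test
    cd /home/vagrant/spdk_repo/spdk
    accel_conf='{}'                                  # placeholder for the harness-generated JSON config
    ./test/accel/dif/dif -c <(printf '%s' "$accel_conf")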
00:07:17.775 00:07:17.775 real 0m0.493s 00:07:17.775 user 0m0.576s 00:07:17.775 sys 0m0.099s 00:07:17.775 14:46:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.775 14:46:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:17.775 ************************************ 00:07:17.775 END TEST accel_dif_functional_tests 00:07:17.775 ************************************ 00:07:17.775 14:46:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.775 00:07:17.775 real 0m31.236s 00:07:17.775 user 0m33.399s 00:07:17.775 sys 0m2.836s 00:07:17.775 14:46:56 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.775 ************************************ 00:07:17.775 END TEST accel 00:07:17.775 ************************************ 00:07:17.775 14:46:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.775 14:46:56 -- common/autotest_common.sh@1142 -- # return 0 00:07:17.775 14:46:56 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:17.775 14:46:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.775 14:46:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.775 14:46:56 -- common/autotest_common.sh@10 -- # set +x 00:07:17.775 ************************************ 00:07:17.775 START TEST accel_rpc 00:07:17.775 ************************************ 00:07:17.775 14:46:56 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:17.775 * Looking for test storage... 00:07:17.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:17.775 14:46:56 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:17.775 14:46:56 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64456 00:07:17.775 14:46:56 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:17.775 14:46:56 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64456 00:07:17.775 14:46:56 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64456 ']' 00:07:17.775 14:46:56 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.775 14:46:56 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.775 14:46:56 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.775 14:46:56 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.775 14:46:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.775 [2024-07-12 14:46:56.410301] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:07:17.775 [2024-07-12 14:46:56.411050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64456 ] 00:07:18.033 [2024-07-12 14:46:56.546128] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.033 [2024-07-12 14:46:56.624913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.968 14:46:57 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.968 14:46:57 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:18.968 14:46:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:18.968 14:46:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:18.968 14:46:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:18.968 14:46:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:18.968 14:46:57 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:18.968 14:46:57 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.968 14:46:57 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.968 14:46:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.968 ************************************ 00:07:18.968 START TEST accel_assign_opcode 00:07:18.968 ************************************ 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:18.968 [2024-07-12 14:46:57.433417] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:18.968 [2024-07-12 14:46:57.441409] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:18.968 14:46:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.226 software 00:07:19.226 00:07:19.226 real 0m0.214s 00:07:19.226 user 0m0.056s 00:07:19.226 sys 0m0.010s 00:07:19.226 14:46:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.226 14:46:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.226 ************************************ 00:07:19.226 END TEST accel_assign_opcode 00:07:19.226 ************************************ 00:07:19.226 14:46:57 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:19.226 14:46:57 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64456 00:07:19.226 14:46:57 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64456 ']' 00:07:19.226 14:46:57 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64456 00:07:19.226 14:46:57 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:19.226 14:46:57 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.226 14:46:57 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64456 00:07:19.226 killing process with pid 64456 00:07:19.226 14:46:57 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:19.226 14:46:57 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:19.226 14:46:57 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64456' 00:07:19.226 14:46:57 accel_rpc -- common/autotest_common.sh@967 -- # kill 64456 00:07:19.226 14:46:57 accel_rpc -- common/autotest_common.sh@972 -- # wait 64456 00:07:19.483 00:07:19.483 real 0m1.710s 00:07:19.483 user 0m1.954s 00:07:19.483 sys 0m0.331s 00:07:19.483 14:46:57 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.483 14:46:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.483 ************************************ 00:07:19.483 END TEST accel_rpc 00:07:19.483 ************************************ 00:07:19.483 14:46:58 -- common/autotest_common.sh@1142 -- # return 0 00:07:19.483 14:46:58 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:19.483 14:46:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.483 14:46:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.483 14:46:58 -- common/autotest_common.sh@10 -- # set +x 00:07:19.483 ************************************ 00:07:19.483 START TEST app_cmdline 00:07:19.483 ************************************ 00:07:19.483 14:46:58 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:19.483 * Looking for test storage... 
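The accel_assign_opcode sequence traced above comes down to three RPC calls against the still-uninitialized target. Replayed directly with rpc.py (script path and default socket as used elsewhere in this log), it looks like this:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # same script the test shells out to
    "$RPC" accel_assign_opc -o copy -m software       # route the 'copy' opcode to the software module
    "$RPC" framework_start_init                       # assignments only take effect at initialization
    "$RPC" accel_get_opc_assignments | jq -r .copy    # read-back; prints 'software' as in the trace

The earlier call with '-m incorrect' is accepted at RPC time, but the later assignment to software overrides it, which is why the read-back reports software.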
00:07:19.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:19.483 14:46:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:19.483 14:46:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64564 00:07:19.483 14:46:58 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:19.483 14:46:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64564 00:07:19.483 14:46:58 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64564 ']' 00:07:19.483 14:46:58 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.483 14:46:58 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.483 14:46:58 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.483 14:46:58 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.483 14:46:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:19.741 [2024-07-12 14:46:58.153731] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:07:19.741 [2024-07-12 14:46:58.153822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64564 ] 00:07:19.741 [2024-07-12 14:46:58.289869] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.741 [2024-07-12 14:46:58.360408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.999 14:46:58 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.999 14:46:58 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:19.999 14:46:58 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:20.257 { 00:07:20.257 "fields": { 00:07:20.257 "commit": "7d88ad9b8", 00:07:20.257 "major": 24, 00:07:20.257 "minor": 9, 00:07:20.257 "patch": 0, 00:07:20.257 "suffix": "-pre" 00:07:20.257 }, 00:07:20.257 "version": "SPDK v24.09-pre git sha1 7d88ad9b8" 00:07:20.257 } 00:07:20.257 14:46:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:20.257 14:46:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:20.257 14:46:58 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:20.257 14:46:58 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:20.257 14:46:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:20.257 14:46:58 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.257 14:46:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:20.257 14:46:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:20.257 14:46:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:20.257 14:46:58 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.257 14:46:58 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:20.257 14:46:58 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:20.257 14:46:58 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:20.257 14:46:58 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:20.257 14:46:58 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:20.257 14:46:58 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:20.257 14:46:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.257 14:46:58 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:20.257 14:46:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.257 14:46:58 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:20.257 14:46:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.257 14:46:58 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:20.257 14:46:58 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:20.257 14:46:58 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:20.515 2024/07/12 14:46:59 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:20.515 request: 00:07:20.515 { 00:07:20.515 "method": "env_dpdk_get_mem_stats", 00:07:20.515 "params": {} 00:07:20.515 } 00:07:20.515 Got JSON-RPC error response 00:07:20.515 GoRPCClient: error on JSON-RPC call 00:07:20.515 14:46:59 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:20.515 14:46:59 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.515 14:46:59 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.515 14:46:59 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.515 14:46:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64564 00:07:20.515 14:46:59 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64564 ']' 00:07:20.515 14:46:59 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64564 00:07:20.516 14:46:59 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:20.516 14:46:59 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.516 14:46:59 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64564 00:07:20.516 14:46:59 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:20.516 killing process with pid 64564 00:07:20.516 14:46:59 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:20.516 14:46:59 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64564' 00:07:20.516 14:46:59 app_cmdline -- common/autotest_common.sh@967 -- # kill 64564 00:07:20.516 14:46:59 app_cmdline -- common/autotest_common.sh@972 -- # wait 64564 00:07:20.774 00:07:20.774 real 0m1.369s 00:07:20.774 user 0m1.808s 00:07:20.774 sys 0m0.348s 00:07:20.774 14:46:59 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.774 14:46:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:20.774 ************************************ 00:07:20.774 END TEST app_cmdline 00:07:20.774 
************************************ 00:07:20.774 14:46:59 -- common/autotest_common.sh@1142 -- # return 0 00:07:20.774 14:46:59 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:20.774 14:46:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.774 14:46:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.774 14:46:59 -- common/autotest_common.sh@10 -- # set +x 00:07:21.033 ************************************ 00:07:21.033 START TEST version 00:07:21.033 ************************************ 00:07:21.033 14:46:59 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:21.033 * Looking for test storage... 00:07:21.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:21.033 14:46:59 version -- app/version.sh@17 -- # get_header_version major 00:07:21.033 14:46:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.033 14:46:59 version -- app/version.sh@14 -- # cut -f2 00:07:21.033 14:46:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.033 14:46:59 version -- app/version.sh@17 -- # major=24 00:07:21.033 14:46:59 version -- app/version.sh@18 -- # get_header_version minor 00:07:21.033 14:46:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.033 14:46:59 version -- app/version.sh@14 -- # cut -f2 00:07:21.033 14:46:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.033 14:46:59 version -- app/version.sh@18 -- # minor=9 00:07:21.033 14:46:59 version -- app/version.sh@19 -- # get_header_version patch 00:07:21.033 14:46:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.033 14:46:59 version -- app/version.sh@14 -- # cut -f2 00:07:21.033 14:46:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.033 14:46:59 version -- app/version.sh@19 -- # patch=0 00:07:21.033 14:46:59 version -- app/version.sh@20 -- # get_header_version suffix 00:07:21.033 14:46:59 version -- app/version.sh@14 -- # cut -f2 00:07:21.033 14:46:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.033 14:46:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.033 14:46:59 version -- app/version.sh@20 -- # suffix=-pre 00:07:21.033 14:46:59 version -- app/version.sh@22 -- # version=24.9 00:07:21.033 14:46:59 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:21.033 14:46:59 version -- app/version.sh@28 -- # version=24.9rc0 00:07:21.033 14:46:59 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:21.033 14:46:59 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:21.033 14:46:59 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:21.033 14:46:59 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:21.033 00:07:21.033 real 0m0.140s 00:07:21.033 user 0m0.076s 00:07:21.033 sys 0m0.089s 00:07:21.033 14:46:59 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.033 14:46:59 version -- common/autotest_common.sh@10 -- # set +x 
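The version test that just completed is a pure text pipeline over include/spdk/version.h followed by a comparison with the Python package's idea of the version. A condensed reconstruction of what the trace shows; the get() helper and the suffix handling are shorthand for the individual grep/cut/tr calls and are not copied from version.sh:

    H=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    get() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$H" | cut -f2 | tr -d '"'; }

    major=$(get MAJOR); minor=$(get MINOR); patch=$(get PATCH); suffix=$(get SUFFIX)
    version="$major.$minor"
    if (( patch != 0 )); then version="$version.$patch"; fi
    if [[ -n $suffix ]]; then version="${version}rc0"; fi    # a '-pre' tree reports as X.Yrc0, as traced above

    echo "$version"    # 24.9rc0 for the tree under test
    PYTHONPATH=/home/vagrant/spdk_repo/spdk/python \
        python3 -c 'import spdk; print(spdk.__version__)'    # must print the same string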
00:07:21.033 ************************************ 00:07:21.033 END TEST version 00:07:21.033 ************************************ 00:07:21.033 14:46:59 -- common/autotest_common.sh@1142 -- # return 0 00:07:21.033 14:46:59 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:21.033 14:46:59 -- spdk/autotest.sh@198 -- # uname -s 00:07:21.033 14:46:59 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:21.033 14:46:59 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:21.033 14:46:59 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:21.033 14:46:59 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:21.033 14:46:59 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:21.033 14:46:59 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:21.033 14:46:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:21.033 14:46:59 -- common/autotest_common.sh@10 -- # set +x 00:07:21.033 14:46:59 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:21.033 14:46:59 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:21.033 14:46:59 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:21.033 14:46:59 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:21.033 14:46:59 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:21.033 14:46:59 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:21.033 14:46:59 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:21.033 14:46:59 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:21.033 14:46:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.033 14:46:59 -- common/autotest_common.sh@10 -- # set +x 00:07:21.033 ************************************ 00:07:21.033 START TEST nvmf_tcp 00:07:21.033 ************************************ 00:07:21.033 14:46:59 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:21.292 * Looking for test storage... 00:07:21.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.292 14:46:59 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:21.292 14:46:59 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.292 14:46:59 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.292 14:46:59 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.292 14:46:59 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.292 14:46:59 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.292 14:46:59 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.292 14:46:59 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:21.293 14:46:59 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.293 14:46:59 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:21.293 14:46:59 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.293 14:46:59 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.293 14:46:59 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.293 14:46:59 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.293 14:46:59 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.293 14:46:59 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.293 14:46:59 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.293 14:46:59 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.293 14:46:59 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:21.293 14:46:59 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:21.293 14:46:59 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:21.293 14:46:59 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.293 14:46:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.293 14:46:59 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:21.293 14:46:59 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:21.293 14:46:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:21.293 14:46:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.293 14:46:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.293 ************************************ 00:07:21.293 START TEST nvmf_example 00:07:21.293 ************************************ 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:21.293 * Looking for test storage... 
00:07:21.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:21.293 14:46:59 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:21.293 Cannot find device "nvmf_init_br" 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:21.293 Cannot find device "nvmf_tgt_br" 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:21.293 Cannot find device "nvmf_tgt_br2" 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:21.293 Cannot find device "nvmf_init_br" 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:21.293 Cannot find device "nvmf_tgt_br" 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:21.293 Cannot find device 
"nvmf_tgt_br2" 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:07:21.293 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:21.551 Cannot find device "nvmf_br" 00:07:21.551 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:07:21.551 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:21.551 Cannot find device "nvmf_init_if" 00:07:21.551 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:07:21.551 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:21.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:21.551 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:07:21.551 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:21.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:21.551 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:07:21.551 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:21.551 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:21.551 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:21.551 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:21.551 14:46:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:07:21.551 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:21.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:07:21.810 00:07:21.810 --- 10.0.0.2 ping statistics --- 00:07:21.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.810 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:21.810 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:21.810 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:07:21.810 00:07:21.810 --- 10.0.0.3 ping statistics --- 00:07:21.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.810 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:21.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:07:21.810 00:07:21.810 --- 10.0.0.1 ping statistics --- 00:07:21.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.810 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=64898 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 
64898 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 64898 ']' 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.810 14:47:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.745 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.745 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:22.745 14:47:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:22.745 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.745 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.745 14:47:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:22.745 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.745 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.745 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:23.003 14:47:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:33.049 Initializing NVMe Controllers 00:07:33.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:33.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:33.049 Initialization complete. Launching workers. 00:07:33.049 ======================================================== 00:07:33.049 Latency(us) 00:07:33.049 Device Information : IOPS MiB/s Average min max 00:07:33.049 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14406.29 56.27 4442.11 866.10 20096.29 00:07:33.049 ======================================================== 00:07:33.049 Total : 14406.29 56.27 4442.11 866.10 20096.29 00:07:33.049 00:07:33.049 14:47:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:33.049 14:47:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:33.049 14:47:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.049 14:47:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:33.315 rmmod nvme_tcp 00:07:33.315 rmmod nvme_fabrics 00:07:33.315 rmmod nvme_keyring 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 64898 ']' 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 64898 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 64898 ']' 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 64898 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64898 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:33.315 killing process with pid 64898 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64898' 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 64898 00:07:33.315 14:47:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 64898 00:07:33.573 nvmf threads initialize successfully 00:07:33.573 bdev subsystem init successfully 
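The nvmf_example target being shut down here was configured entirely over RPC before spdk_nvme_perf attached to it. Replayed outside the harness with the same parameters that appear in the trace (the rpc.py path and default socket are assumptions carried over from earlier in this log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" nvmf_create_transport -t tcp -o -u 8192                # transport creation, flags copied verbatim from the trace
    "$RPC" bdev_malloc_create 64 512                              # 64 MiB malloc bdev, 512 B blocks -> Malloc0
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Load generator used above: queue depth 64, 4 KiB random read/write mix, 10 s run.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

spdk_nvme_perf runs from the default namespace and reaches the listener at 10.0.0.2:4420 across the bridge set up earlier; the roughly 14.4k IOPS summary above is its output.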
00:07:33.573 created a nvmf target service 00:07:33.573 create targets's poll groups done 00:07:33.573 all subsystems of target started 00:07:33.573 nvmf target is running 00:07:33.573 all subsystems of target stopped 00:07:33.573 destroy targets's poll groups done 00:07:33.573 destroyed the nvmf target service 00:07:33.573 bdev subsystem finish successfully 00:07:33.573 nvmf threads destroy successfully 00:07:33.573 14:47:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.573 14:47:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.573 14:47:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.573 14:47:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.573 14:47:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.573 14:47:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.573 14:47:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.573 14:47:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.573 14:47:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:33.573 14:47:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:33.573 14:47:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:33.573 14:47:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.573 00:07:33.573 real 0m12.351s 00:07:33.573 user 0m44.406s 00:07:33.573 sys 0m1.952s 00:07:33.573 14:47:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.573 14:47:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.573 ************************************ 00:07:33.573 END TEST nvmf_example 00:07:33.573 ************************************ 00:07:33.573 14:47:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:33.573 14:47:12 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:33.573 14:47:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:33.573 14:47:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.573 14:47:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.573 ************************************ 00:07:33.573 START TEST nvmf_filesystem 00:07:33.573 ************************************ 00:07:33.573 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:33.833 * Looking for test storage... 
00:07:33.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:33.833 14:47:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:33.833 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:33.833 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:33.833 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:33.833 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:33.833 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:33.833 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:33.833 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:33.833 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:33.833 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:33.833 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:33.833 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:33.833 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:33.833 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:33.834 14:47:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:33.834 #define SPDK_CONFIG_H 00:07:33.834 #define SPDK_CONFIG_APPS 1 00:07:33.834 #define SPDK_CONFIG_ARCH native 00:07:33.834 #undef SPDK_CONFIG_ASAN 00:07:33.834 #define SPDK_CONFIG_AVAHI 1 00:07:33.834 #undef SPDK_CONFIG_CET 00:07:33.834 #define SPDK_CONFIG_COVERAGE 1 00:07:33.834 #define SPDK_CONFIG_CROSS_PREFIX 00:07:33.834 #undef SPDK_CONFIG_CRYPTO 00:07:33.834 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:33.834 #undef SPDK_CONFIG_CUSTOMOCF 00:07:33.834 #undef SPDK_CONFIG_DAOS 00:07:33.834 #define SPDK_CONFIG_DAOS_DIR 00:07:33.835 #define SPDK_CONFIG_DEBUG 1 00:07:33.835 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:33.835 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:33.835 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:33.835 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:33.835 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:33.835 #undef SPDK_CONFIG_DPDK_UADK 00:07:33.835 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:33.835 #define SPDK_CONFIG_EXAMPLES 1 00:07:33.835 #undef SPDK_CONFIG_FC 00:07:33.835 #define SPDK_CONFIG_FC_PATH 00:07:33.835 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:33.835 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:33.835 #undef SPDK_CONFIG_FUSE 00:07:33.835 #undef SPDK_CONFIG_FUZZER 00:07:33.835 #define SPDK_CONFIG_FUZZER_LIB 00:07:33.835 #define SPDK_CONFIG_GOLANG 1 00:07:33.835 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:33.835 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:33.835 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:33.835 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:33.835 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:33.835 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:33.835 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:33.835 #define SPDK_CONFIG_IDXD 1 00:07:33.835 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:33.835 #undef SPDK_CONFIG_IPSEC_MB 00:07:33.835 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:33.835 #define SPDK_CONFIG_ISAL 1 00:07:33.835 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:33.835 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:33.835 #define SPDK_CONFIG_LIBDIR 00:07:33.835 #undef SPDK_CONFIG_LTO 00:07:33.835 #define SPDK_CONFIG_MAX_LCORES 128 00:07:33.835 #define SPDK_CONFIG_NVME_CUSE 1 00:07:33.835 #undef SPDK_CONFIG_OCF 00:07:33.835 #define SPDK_CONFIG_OCF_PATH 00:07:33.835 #define SPDK_CONFIG_OPENSSL_PATH 00:07:33.835 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:33.835 #define SPDK_CONFIG_PGO_DIR 00:07:33.835 #undef SPDK_CONFIG_PGO_USE 00:07:33.835 #define SPDK_CONFIG_PREFIX /usr/local 00:07:33.835 #undef SPDK_CONFIG_RAID5F 00:07:33.835 #undef SPDK_CONFIG_RBD 00:07:33.835 #define SPDK_CONFIG_RDMA 1 00:07:33.835 #define SPDK_CONFIG_RDMA_PROV verbs 
00:07:33.835 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:33.835 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:33.835 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:33.835 #define SPDK_CONFIG_SHARED 1 00:07:33.835 #undef SPDK_CONFIG_SMA 00:07:33.835 #define SPDK_CONFIG_TESTS 1 00:07:33.835 #undef SPDK_CONFIG_TSAN 00:07:33.835 #define SPDK_CONFIG_UBLK 1 00:07:33.835 #define SPDK_CONFIG_UBSAN 1 00:07:33.835 #undef SPDK_CONFIG_UNIT_TESTS 00:07:33.835 #undef SPDK_CONFIG_URING 00:07:33.835 #define SPDK_CONFIG_URING_PATH 00:07:33.835 #undef SPDK_CONFIG_URING_ZNS 00:07:33.835 #define SPDK_CONFIG_USDT 1 00:07:33.835 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:33.835 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:33.835 #undef SPDK_CONFIG_VFIO_USER 00:07:33.835 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:33.835 #define SPDK_CONFIG_VHOST 1 00:07:33.835 #define SPDK_CONFIG_VIRTIO 1 00:07:33.835 #undef SPDK_CONFIG_VTUNE 00:07:33.835 #define SPDK_CONFIG_VTUNE_DIR 00:07:33.835 #define SPDK_CONFIG_WERROR 1 00:07:33.835 #define SPDK_CONFIG_WPDK_DIR 00:07:33.835 #undef SPDK_CONFIG_XNVME 00:07:33.835 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- 
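Illustrative sketch (not part of the captured trace): the long applications.sh@23 line above, ending in == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]], is bash expanding the entire contents of include/spdk/config.h inside a glob match to decide whether this is a debug build. Condensed, the check amounts to the following (the variable name is_debug_build is hypothetical):
  config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
  is_debug_build=0
  if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      is_debug_build=1   # the xtrace above shows this pattern matching, i.e. a debug build
  fi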
pm/common@81 -- # [[ Linux == Linux ]] 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:33.835 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:33.836 14:47:12 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 
-- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@164 -- # : 0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:33.836 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 65144 ]] 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 65144 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- 
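Illustrative sketch (not part of the captured trace): the long run of `-- # : 0` / `-- # export SPDK_TEST_*` pairs earlier in this trace corresponds to the usual bash default-setting idiom, where `: "${VAR:=default}"` assigns a default only if the variable is unset and xtrace prints the expanded value (`: 0`, `: 1`, `: tcp`), after which the variable is exported. A pattern along these lines:
  : "${SPDK_TEST_NVMF:=0}"               # traced as `: 1` above because this job enables NVMF tests
  export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"   # traced as `: tcp` above
  export SPDK_TEST_NVMF_TRANSPORT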
common/autotest_common.sh@331 -- # local mount target_dir 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.VQUaea 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.VQUaea/tests/target /tmp/spdk.VQUaea 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264516608 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:07:33.837 
14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13785247744 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5244440576 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13785247744 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5244440576 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267752448 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=139264 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora38-libvirt/output 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=94404186112 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5298593792 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:33.837 * Looking for test storage... 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:33.837 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13785247744 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:33.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- 
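Illustrative sketch (not part of the captured trace): set_test_storage, traced above, walks a list of candidate directories and keeps the first one whose filesystem has at least the requested free space (~2 GiB plus slack), then exports it as SPDK_TEST_STORAGE; here /home/vagrant/spdk_repo/spdk/test/nvmf/target on btrfs, with roughly 13.8 GB available, is selected. A simplified version of the selection logic (the real script parses `df -T` into arrays; `df --output` is used here for brevity, and $testdir is assumed to be set by the caller):
  requested_size=2214592512   # ~2 GiB of test data plus slack, as in the trace above
  storage_fallback=$(mktemp -udt spdk.XXXXXX)
  candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
  for dir in "${candidates[@]}"; do
      mkdir -p "$dir"
      avail=$(df --output=avail -B1 "$dir" | tail -n1)   # free bytes on the backing filesystem
      if (( avail >= requested_size )); then
          export SPDK_TEST_STORAGE=$dir
          break
      fi
  done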
common/autotest_common.sh@1682 -- # set -o errtrace 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:33.838 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:33.839 Cannot find device "nvmf_tgt_br" 00:07:33.839 14:47:12 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:33.839 Cannot find device "nvmf_tgt_br2" 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:33.839 Cannot find device "nvmf_tgt_br" 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:33.839 Cannot find device "nvmf_tgt_br2" 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:33.839 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:34.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:34.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:34.095 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:34.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:07:34.096 00:07:34.096 --- 10.0.0.2 ping statistics --- 00:07:34.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.096 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:34.096 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:34.096 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:07:34.096 00:07:34.096 --- 10.0.0.3 ping statistics --- 00:07:34.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.096 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:34.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
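Illustrative sketch (not part of the captured trace): the ip/iptables commands above build the virtual topology used by the tcp tests: a network namespace holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator-side end (10.0.0.1) left in the root namespace, all joined by the nvmf_br bridge, with TCP port 4420 allowed in; the pings then verify connectivity in both directions. Condensed into a runnable sequence with the same names and addresses as the trace:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1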
00:07:34.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:07:34.096 00:07:34.096 --- 10.0.0.1 ping statistics --- 00:07:34.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.096 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.096 ************************************ 00:07:34.096 START TEST nvmf_filesystem_no_in_capsule 00:07:34.096 ************************************ 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65306 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65306 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65306 ']' 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:34.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.096 14:47:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.352 [2024-07-12 14:47:12.789714] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:07:34.352 [2024-07-12 14:47:12.789799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.352 [2024-07-12 14:47:12.925037] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.352 [2024-07-12 14:47:12.998894] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.352 [2024-07-12 14:47:12.998983] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.352 [2024-07-12 14:47:12.998997] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.352 [2024-07-12 14:47:12.999010] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.352 [2024-07-12 14:47:12.999019] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.352 [2024-07-12 14:47:12.999161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.352 [2024-07-12 14:47:12.999621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.352 [2024-07-12 14:47:13.000079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.352 [2024-07-12 14:47:13.000118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.609 [2024-07-12 14:47:13.127172] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.609 Malloc1 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.609 [2024-07-12 14:47:13.251855] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.609 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.866 14:47:13 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.866 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:34.866 { 00:07:34.866 "aliases": [ 00:07:34.866 "d52b9da9-55bc-4bfb-bad9-6156507c8996" 00:07:34.866 ], 00:07:34.866 "assigned_rate_limits": { 00:07:34.866 "r_mbytes_per_sec": 0, 00:07:34.866 "rw_ios_per_sec": 0, 00:07:34.866 "rw_mbytes_per_sec": 0, 00:07:34.867 "w_mbytes_per_sec": 0 00:07:34.867 }, 00:07:34.867 "block_size": 512, 00:07:34.867 "claim_type": "exclusive_write", 00:07:34.867 "claimed": true, 00:07:34.867 "driver_specific": {}, 00:07:34.867 "memory_domains": [ 00:07:34.867 { 00:07:34.867 "dma_device_id": "system", 00:07:34.867 "dma_device_type": 1 00:07:34.867 }, 00:07:34.867 { 00:07:34.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.867 "dma_device_type": 2 00:07:34.867 } 00:07:34.867 ], 00:07:34.867 "name": "Malloc1", 00:07:34.867 "num_blocks": 1048576, 00:07:34.867 "product_name": "Malloc disk", 00:07:34.867 "supported_io_types": { 00:07:34.867 "abort": true, 00:07:34.867 "compare": false, 00:07:34.867 "compare_and_write": false, 00:07:34.867 "copy": true, 00:07:34.867 "flush": true, 00:07:34.867 "get_zone_info": false, 00:07:34.867 "nvme_admin": false, 00:07:34.867 "nvme_io": false, 00:07:34.867 "nvme_io_md": false, 00:07:34.867 "nvme_iov_md": false, 00:07:34.867 "read": true, 00:07:34.867 "reset": true, 00:07:34.867 "seek_data": false, 00:07:34.867 "seek_hole": false, 00:07:34.867 "unmap": true, 00:07:34.867 "write": true, 00:07:34.867 "write_zeroes": true, 00:07:34.867 "zcopy": true, 00:07:34.867 "zone_append": false, 00:07:34.867 "zone_management": false 00:07:34.867 }, 00:07:34.867 "uuid": "d52b9da9-55bc-4bfb-bad9-6156507c8996", 00:07:34.867 "zoned": false 00:07:34.867 } 00:07:34.867 ]' 00:07:34.867 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:34.867 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:34.867 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:34.867 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:34.867 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:34.867 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:34.867 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:34.867 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:35.124 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:35.124 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:35.124 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:35.124 14:47:13 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:35.124 14:47:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:37.022 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:37.279 14:47:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.213 ************************************ 00:07:38.213 START TEST 
filesystem_ext4 00:07:38.213 ************************************ 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:38.213 mke2fs 1.46.5 (30-Dec-2021) 00:07:38.213 Discarding device blocks: 0/522240 done 00:07:38.213 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:38.213 Filesystem UUID: 8bc31787-ec42-4f42-bebd-a8099885648e 00:07:38.213 Superblock backups stored on blocks: 00:07:38.213 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:38.213 00:07:38.213 Allocating group tables: 0/64 done 00:07:38.213 Writing inode tables: 0/64 done 00:07:38.213 Creating journal (8192 blocks): done 00:07:38.213 Writing superblocks and filesystem accounting information: 0/64 done 00:07:38.213 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:38.213 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:38.473 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:38.473 14:47:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@37 -- # kill -0 65306 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:38.473 00:07:38.473 real 0m0.337s 00:07:38.473 user 0m0.024s 00:07:38.473 sys 0m0.047s 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:38.473 ************************************ 00:07:38.473 END TEST filesystem_ext4 00:07:38.473 ************************************ 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.473 ************************************ 00:07:38.473 START TEST filesystem_btrfs 00:07:38.473 ************************************ 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:38.473 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:38.473 14:47:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:38.732 btrfs-progs v6.6.2 00:07:38.732 See https://btrfs.readthedocs.io for more information. 00:07:38.732 00:07:38.732 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:38.732 NOTE: several default settings have changed in version 5.15, please make sure 00:07:38.732 this does not affect your deployments: 00:07:38.732 - DUP for metadata (-m dup) 00:07:38.732 - enabled no-holes (-O no-holes) 00:07:38.732 - enabled free-space-tree (-R free-space-tree) 00:07:38.732 00:07:38.732 Label: (null) 00:07:38.732 UUID: 3a6fb9a2-9576-484b-b287-b3d61c72d075 00:07:38.732 Node size: 16384 00:07:38.732 Sector size: 4096 00:07:38.732 Filesystem size: 510.00MiB 00:07:38.732 Block group profiles: 00:07:38.732 Data: single 8.00MiB 00:07:38.732 Metadata: DUP 32.00MiB 00:07:38.732 System: DUP 8.00MiB 00:07:38.732 SSD detected: yes 00:07:38.732 Zoned device: no 00:07:38.732 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:38.732 Runtime features: free-space-tree 00:07:38.732 Checksum: crc32c 00:07:38.732 Number of devices: 1 00:07:38.732 Devices: 00:07:38.732 ID SIZE PATH 00:07:38.732 1 510.00MiB /dev/nvme0n1p1 00:07:38.732 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 65306 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:38.732 00:07:38.732 real 0m0.182s 00:07:38.732 user 0m0.022s 00:07:38.732 sys 0m0.056s 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:38.732 
************************************ 00:07:38.732 END TEST filesystem_btrfs 00:07:38.732 ************************************ 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.732 ************************************ 00:07:38.732 START TEST filesystem_xfs 00:07:38.732 ************************************ 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:38.732 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:38.733 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:38.733 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:38.733 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:38.733 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:38.733 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:38.733 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:38.733 14:47:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:38.991 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:38.991 = sectsz=512 attr=2, projid32bit=1 00:07:38.991 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:38.991 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:38.991 data = bsize=4096 blocks=130560, imaxpct=25 00:07:38.991 = sunit=0 swidth=0 blks 00:07:38.991 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:38.991 log =internal log bsize=4096 blocks=16384, version=2 00:07:38.991 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:38.991 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:39.559 Discarding blocks...Done. 
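For reference, each filesystem_* subtest traced above and below runs the same smoke test against the exported namespace. A minimal standalone sketch of that flow, assuming the NVMe/TCP namespace is already connected; /dev/nvme0n1p1 and /mnt/device are the names from this particular run, and the force flag differs per mkfs tool:

    # make the filesystem on the partition created earlier with parted
    mkfs.xfs -f /dev/nvme0n1p1            # the ext4 subtest uses mkfs.ext4 -F, btrfs uses mkfs.btrfs -f
    # mount it, create and remove a file, then unmount cleanly
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    # the subtest also verifies the target process is still alive: kill -0 "$nvmfpid"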
00:07:39.559 14:47:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:39.559 14:47:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 65306 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.090 00:07:42.090 real 0m3.186s 00:07:42.090 user 0m0.016s 00:07:42.090 sys 0m0.051s 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:42.090 ************************************ 00:07:42.090 END TEST filesystem_xfs 00:07:42.090 ************************************ 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:42.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.090 14:47:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 65306 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65306 ']' 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65306 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65306 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:42.090 killing process with pid 65306 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65306' 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 65306 00:07:42.090 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 65306 00:07:42.348 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:42.348 00:07:42.348 real 0m8.218s 00:07:42.348 user 0m30.809s 00:07:42.348 sys 0m1.469s 00:07:42.348 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.348 14:47:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.348 ************************************ 00:07:42.348 END TEST nvmf_filesystem_no_in_capsule 00:07:42.348 ************************************ 00:07:42.348 14:47:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:42.348 14:47:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:42.348 14:47:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
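Before the in-capsule variant below repeats the same bring-up with -c 4096 instead of -c 0, here is a condensed sketch of the target setup and teardown that the rpc_cmd calls above traced. The scripts/rpc.py wrapper and the relative paths are assumptions about a typical SPDK checkout; the NQN, serial, address and port are the values used in this run:

    # start the target inside the test namespace and create the TCP transport
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -m 0xF &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0    # the in-capsule test passes -c 4096
    # back the subsystem with a 512 MiB malloc bdev and listen on 10.0.0.2:4420
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # host side connects over TCP, and teardown reverses the steps
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1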
00:07:42.348 14:47:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.348 14:47:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.606 ************************************ 00:07:42.606 START TEST nvmf_filesystem_in_capsule 00:07:42.606 ************************************ 00:07:42.606 14:47:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:42.606 14:47:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:42.606 14:47:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:42.606 14:47:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:42.606 14:47:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.606 14:47:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.606 14:47:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65599 00:07:42.606 14:47:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65599 00:07:42.606 14:47:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.606 14:47:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65599 ']' 00:07:42.606 14:47:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.606 14:47:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.606 14:47:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.606 14:47:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.606 14:47:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.606 [2024-07-12 14:47:21.058003] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:07:42.606 [2024-07-12 14:47:21.058104] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.606 [2024-07-12 14:47:21.211704] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.865 [2024-07-12 14:47:21.294918] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.865 [2024-07-12 14:47:21.294987] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.865 [2024-07-12 14:47:21.295002] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.865 [2024-07-12 14:47:21.295016] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:42.865 [2024-07-12 14:47:21.295028] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.865 [2024-07-12 14:47:21.295210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.865 [2024-07-12 14:47:21.295263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.865 [2024-07-12 14:47:21.295914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.865 [2024-07-12 14:47:21.295922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.800 [2024-07-12 14:47:22.198022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.800 Malloc1 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.800 14:47:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.800 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.801 [2024-07-12 14:47:22.323079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:43.801 { 00:07:43.801 "aliases": [ 00:07:43.801 "10030f47-deb3-4a9d-ad57-a34b6c114fb5" 00:07:43.801 ], 00:07:43.801 "assigned_rate_limits": { 00:07:43.801 "r_mbytes_per_sec": 0, 00:07:43.801 "rw_ios_per_sec": 0, 00:07:43.801 "rw_mbytes_per_sec": 0, 00:07:43.801 "w_mbytes_per_sec": 0 00:07:43.801 }, 00:07:43.801 "block_size": 512, 00:07:43.801 "claim_type": "exclusive_write", 00:07:43.801 "claimed": true, 00:07:43.801 "driver_specific": {}, 00:07:43.801 "memory_domains": [ 00:07:43.801 { 00:07:43.801 "dma_device_id": "system", 00:07:43.801 "dma_device_type": 1 00:07:43.801 }, 00:07:43.801 { 00:07:43.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.801 "dma_device_type": 2 00:07:43.801 } 00:07:43.801 ], 00:07:43.801 "name": "Malloc1", 00:07:43.801 "num_blocks": 1048576, 00:07:43.801 "product_name": "Malloc disk", 00:07:43.801 "supported_io_types": { 00:07:43.801 "abort": true, 00:07:43.801 "compare": false, 00:07:43.801 "compare_and_write": false, 00:07:43.801 "copy": true, 00:07:43.801 "flush": true, 00:07:43.801 "get_zone_info": false, 00:07:43.801 "nvme_admin": false, 00:07:43.801 "nvme_io": false, 00:07:43.801 "nvme_io_md": false, 00:07:43.801 "nvme_iov_md": false, 00:07:43.801 "read": true, 00:07:43.801 "reset": true, 00:07:43.801 "seek_data": false, 00:07:43.801 "seek_hole": false, 00:07:43.801 "unmap": true, 
00:07:43.801 "write": true, 00:07:43.801 "write_zeroes": true, 00:07:43.801 "zcopy": true, 00:07:43.801 "zone_append": false, 00:07:43.801 "zone_management": false 00:07:43.801 }, 00:07:43.801 "uuid": "10030f47-deb3-4a9d-ad57-a34b6c114fb5", 00:07:43.801 "zoned": false 00:07:43.801 } 00:07:43.801 ]' 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:43.801 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:44.060 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:44.060 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:44.060 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:44.060 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:44.060 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:44.060 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:44.060 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:44.060 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:44.060 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:44.060 14:47:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:46.593 14:47:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:46.593 14:47:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:47.160 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:47.160 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:47.160 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:47.160 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.160 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.160 ************************************ 00:07:47.160 START TEST filesystem_in_capsule_ext4 00:07:47.160 ************************************ 00:07:47.160 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:47.160 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:47.160 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.160 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:47.160 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:47.160 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:47.160 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:47.160 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:47.160 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:47.160 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:47.160 14:47:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:47.160 mke2fs 1.46.5 (30-Dec-2021) 00:07:47.418 Discarding device blocks: 0/522240 done 00:07:47.418 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:47.418 Filesystem UUID: 273fad65-df87-4366-bda9-72f0e8a9182f 00:07:47.418 Superblock backups stored on blocks: 00:07:47.418 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:47.418 00:07:47.418 Allocating group tables: 0/64 done 00:07:47.418 Writing inode tables: 0/64 done 00:07:47.418 Creating journal (8192 blocks): done 00:07:47.418 Writing superblocks and filesystem accounting information: 0/64 done 00:07:47.418 00:07:47.418 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:47.418 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.418 14:47:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.418 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:47.418 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.418 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:47.418 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:47.418 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.418 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65599 00:07:47.418 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.418 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.418 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.418 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.677 00:07:47.677 real 0m0.290s 00:07:47.677 user 0m0.012s 00:07:47.677 sys 0m0.052s 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.677 ************************************ 00:07:47.677 END TEST filesystem_in_capsule_ext4 00:07:47.677 ************************************ 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:47.677 14:47:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.677 ************************************ 00:07:47.677 START TEST filesystem_in_capsule_btrfs 00:07:47.677 ************************************ 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:47.677 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:47.677 btrfs-progs v6.6.2 00:07:47.677 See https://btrfs.readthedocs.io for more information. 00:07:47.677 00:07:47.677 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:47.677 NOTE: several default settings have changed in version 5.15, please make sure 00:07:47.677 this does not affect your deployments: 00:07:47.677 - DUP for metadata (-m dup) 00:07:47.677 - enabled no-holes (-O no-holes) 00:07:47.677 - enabled free-space-tree (-R free-space-tree) 00:07:47.677 00:07:47.677 Label: (null) 00:07:47.677 UUID: 8fc7a946-1e62-43b6-bd7e-3936be79c66e 00:07:47.678 Node size: 16384 00:07:47.678 Sector size: 4096 00:07:47.678 Filesystem size: 510.00MiB 00:07:47.678 Block group profiles: 00:07:47.678 Data: single 8.00MiB 00:07:47.678 Metadata: DUP 32.00MiB 00:07:47.678 System: DUP 8.00MiB 00:07:47.678 SSD detected: yes 00:07:47.678 Zoned device: no 00:07:47.678 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:47.678 Runtime features: free-space-tree 00:07:47.678 Checksum: crc32c 00:07:47.678 Number of devices: 1 00:07:47.678 Devices: 00:07:47.678 ID SIZE PATH 00:07:47.678 1 510.00MiB /dev/nvme0n1p1 00:07:47.678 00:07:47.678 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:47.678 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.678 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.678 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:47.678 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.678 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:47.678 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:47.678 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.678 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65599 00:07:47.678 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.678 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.678 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.678 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.678 00:07:47.678 real 0m0.185s 00:07:47.678 user 0m0.019s 00:07:47.678 sys 0m0.055s 00:07:47.678 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.678 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:47.678 ************************************ 00:07:47.678 END TEST filesystem_in_capsule_btrfs 00:07:47.678 ************************************ 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.936 ************************************ 00:07:47.936 START TEST filesystem_in_capsule_xfs 00:07:47.936 ************************************ 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:47.936 14:47:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:47.936 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:47.936 = sectsz=512 attr=2, projid32bit=1 00:07:47.936 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:47.936 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:47.936 data = bsize=4096 blocks=130560, imaxpct=25 00:07:47.936 = sunit=0 swidth=0 blks 00:07:47.936 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:47.936 log =internal log bsize=4096 blocks=16384, version=2 00:07:47.936 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:47.936 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:48.504 Discarding blocks...Done. 
00:07:48.504 14:47:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:48.504 14:47:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65599 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.406 00:07:50.406 real 0m2.582s 00:07:50.406 user 0m0.020s 00:07:50.406 sys 0m0.055s 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.406 ************************************ 00:07:50.406 END TEST filesystem_in_capsule_xfs 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:50.406 ************************************ 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:50.406 14:47:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:50.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.406 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:50.406 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:50.406 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.406 14:47:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:50.406 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:50.406 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65599 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65599 ']' 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65599 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65599 00:07:50.665 killing process with pid 65599 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65599' 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 65599 00:07:50.665 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 65599 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:50.922 00:07:50.922 real 0m8.370s 00:07:50.922 user 0m31.734s 00:07:50.922 sys 0m1.414s 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.922 ************************************ 00:07:50.922 END TEST nvmf_filesystem_in_capsule 00:07:50.922 ************************************ 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 
00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:50.922 rmmod nvme_tcp 00:07:50.922 rmmod nvme_fabrics 00:07:50.922 rmmod nvme_keyring 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:50.922 00:07:50.922 real 0m17.359s 00:07:50.922 user 1m2.743s 00:07:50.922 sys 0m3.255s 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.922 14:47:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.922 ************************************ 00:07:50.922 END TEST nvmf_filesystem 00:07:50.922 ************************************ 00:07:50.922 14:47:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:50.922 14:47:29 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:50.922 14:47:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:50.922 14:47:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.922 14:47:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.922 ************************************ 00:07:50.922 START TEST nvmf_target_discovery 00:07:50.922 ************************************ 00:07:50.922 14:47:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:51.181 * Looking for test storage... 
00:07:51.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:51.181 14:47:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:51.181 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:51.181 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.181 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.181 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.181 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.181 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.181 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.181 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.181 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.181 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.181 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:51.182 Cannot find device "nvmf_tgt_br" 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:51.182 Cannot find device "nvmf_tgt_br2" 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:51.182 Cannot find device "nvmf_tgt_br" 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:51.182 Cannot find device "nvmf_tgt_br2" 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:51.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:51.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:51.182 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:51.441 14:47:29 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:51.441 14:47:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:51.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:07:51.441 00:07:51.441 --- 10.0.0.2 ping statistics --- 00:07:51.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.441 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:51.441 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:51.441 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:07:51.441 00:07:51.441 --- 10.0.0.3 ping statistics --- 00:07:51.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.441 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:51.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:07:51.441 00:07:51.441 --- 10.0.0.1 ping statistics --- 00:07:51.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.441 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=66047 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 66047 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 66047 ']' 00:07:51.441 14:47:30 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.441 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.699 [2024-07-12 14:47:30.129885] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:07:51.700 [2024-07-12 14:47:30.129998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.700 [2024-07-12 14:47:30.266817] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.700 [2024-07-12 14:47:30.325238] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.700 [2024-07-12 14:47:30.325286] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.700 [2024-07-12 14:47:30.325297] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.700 [2024-07-12 14:47:30.325306] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.700 [2024-07-12 14:47:30.325313] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:51.700 [2024-07-12 14:47:30.325431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.700 [2024-07-12 14:47:30.325584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.700 [2024-07-12 14:47:30.325625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.700 [2024-07-12 14:47:30.325627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.958 [2024-07-12 14:47:30.451096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.958 Null1 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.958 [2024-07-12 14:47:30.513027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.958 Null2 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.958 Null3 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.958 14:47:30 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:51.958 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.959 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.959 Null4 00:07:51.959 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.959 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:51.959 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.959 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.959 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.959 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:51.959 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.959 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.959 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.959 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:51.959 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.959 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.218 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.218 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:52.218 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.218 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.218 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.218 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:52.218 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.218 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.218 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.218 
14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -a 10.0.0.2 -s 4420 00:07:52.218 00:07:52.218 Discovery Log Number of Records 6, Generation counter 6 00:07:52.218 =====Discovery Log Entry 0====== 00:07:52.218 trtype: tcp 00:07:52.218 adrfam: ipv4 00:07:52.218 subtype: current discovery subsystem 00:07:52.218 treq: not required 00:07:52.218 portid: 0 00:07:52.218 trsvcid: 4420 00:07:52.218 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:52.218 traddr: 10.0.0.2 00:07:52.218 eflags: explicit discovery connections, duplicate discovery information 00:07:52.218 sectype: none 00:07:52.218 =====Discovery Log Entry 1====== 00:07:52.218 trtype: tcp 00:07:52.218 adrfam: ipv4 00:07:52.218 subtype: nvme subsystem 00:07:52.218 treq: not required 00:07:52.218 portid: 0 00:07:52.218 trsvcid: 4420 00:07:52.218 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:52.218 traddr: 10.0.0.2 00:07:52.218 eflags: none 00:07:52.218 sectype: none 00:07:52.218 =====Discovery Log Entry 2====== 00:07:52.218 trtype: tcp 00:07:52.218 adrfam: ipv4 00:07:52.218 subtype: nvme subsystem 00:07:52.218 treq: not required 00:07:52.218 portid: 0 00:07:52.218 trsvcid: 4420 00:07:52.218 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:52.218 traddr: 10.0.0.2 00:07:52.218 eflags: none 00:07:52.218 sectype: none 00:07:52.218 =====Discovery Log Entry 3====== 00:07:52.218 trtype: tcp 00:07:52.218 adrfam: ipv4 00:07:52.218 subtype: nvme subsystem 00:07:52.218 treq: not required 00:07:52.218 portid: 0 00:07:52.218 trsvcid: 4420 00:07:52.218 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:52.218 traddr: 10.0.0.2 00:07:52.218 eflags: none 00:07:52.218 sectype: none 00:07:52.218 =====Discovery Log Entry 4====== 00:07:52.218 trtype: tcp 00:07:52.218 adrfam: ipv4 00:07:52.218 subtype: nvme subsystem 00:07:52.218 treq: not required 00:07:52.218 portid: 0 00:07:52.218 trsvcid: 4420 00:07:52.218 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:52.218 traddr: 10.0.0.2 00:07:52.218 eflags: none 00:07:52.218 sectype: none 00:07:52.218 =====Discovery Log Entry 5====== 00:07:52.218 trtype: tcp 00:07:52.218 adrfam: ipv4 00:07:52.218 subtype: discovery subsystem referral 00:07:52.218 treq: not required 00:07:52.218 portid: 0 00:07:52.218 trsvcid: 4430 00:07:52.218 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:52.218 traddr: 10.0.0.2 00:07:52.218 eflags: none 00:07:52.218 sectype: none 00:07:52.218 Perform nvmf subsystem discovery via RPC 00:07:52.218 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:52.218 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:52.218 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.218 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.218 [ 00:07:52.218 { 00:07:52.218 "allow_any_host": true, 00:07:52.218 "hosts": [], 00:07:52.218 "listen_addresses": [ 00:07:52.218 { 00:07:52.218 "adrfam": "IPv4", 00:07:52.218 "traddr": "10.0.0.2", 00:07:52.218 "trsvcid": "4420", 00:07:52.218 "trtype": "TCP" 00:07:52.218 } 00:07:52.218 ], 00:07:52.218 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:52.218 "subtype": "Discovery" 00:07:52.218 }, 00:07:52.218 { 00:07:52.218 "allow_any_host": true, 00:07:52.218 "hosts": [], 00:07:52.218 "listen_addresses": [ 00:07:52.218 { 
00:07:52.218 "adrfam": "IPv4", 00:07:52.218 "traddr": "10.0.0.2", 00:07:52.218 "trsvcid": "4420", 00:07:52.218 "trtype": "TCP" 00:07:52.218 } 00:07:52.218 ], 00:07:52.218 "max_cntlid": 65519, 00:07:52.218 "max_namespaces": 32, 00:07:52.218 "min_cntlid": 1, 00:07:52.218 "model_number": "SPDK bdev Controller", 00:07:52.218 "namespaces": [ 00:07:52.218 { 00:07:52.218 "bdev_name": "Null1", 00:07:52.218 "name": "Null1", 00:07:52.218 "nguid": "0AC147DCC48947D682737DCD0C22876E", 00:07:52.218 "nsid": 1, 00:07:52.218 "uuid": "0ac147dc-c489-47d6-8273-7dcd0c22876e" 00:07:52.218 } 00:07:52.218 ], 00:07:52.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:52.218 "serial_number": "SPDK00000000000001", 00:07:52.218 "subtype": "NVMe" 00:07:52.218 }, 00:07:52.218 { 00:07:52.218 "allow_any_host": true, 00:07:52.218 "hosts": [], 00:07:52.218 "listen_addresses": [ 00:07:52.218 { 00:07:52.218 "adrfam": "IPv4", 00:07:52.218 "traddr": "10.0.0.2", 00:07:52.218 "trsvcid": "4420", 00:07:52.218 "trtype": "TCP" 00:07:52.218 } 00:07:52.218 ], 00:07:52.218 "max_cntlid": 65519, 00:07:52.218 "max_namespaces": 32, 00:07:52.218 "min_cntlid": 1, 00:07:52.218 "model_number": "SPDK bdev Controller", 00:07:52.218 "namespaces": [ 00:07:52.218 { 00:07:52.218 "bdev_name": "Null2", 00:07:52.218 "name": "Null2", 00:07:52.218 "nguid": "B8322C224F754762B3C1368EDDC0044E", 00:07:52.218 "nsid": 1, 00:07:52.218 "uuid": "b8322c22-4f75-4762-b3c1-368eddc0044e" 00:07:52.218 } 00:07:52.218 ], 00:07:52.218 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:52.218 "serial_number": "SPDK00000000000002", 00:07:52.218 "subtype": "NVMe" 00:07:52.218 }, 00:07:52.218 { 00:07:52.218 "allow_any_host": true, 00:07:52.218 "hosts": [], 00:07:52.218 "listen_addresses": [ 00:07:52.218 { 00:07:52.218 "adrfam": "IPv4", 00:07:52.218 "traddr": "10.0.0.2", 00:07:52.218 "trsvcid": "4420", 00:07:52.218 "trtype": "TCP" 00:07:52.218 } 00:07:52.218 ], 00:07:52.218 "max_cntlid": 65519, 00:07:52.218 "max_namespaces": 32, 00:07:52.218 "min_cntlid": 1, 00:07:52.219 "model_number": "SPDK bdev Controller", 00:07:52.219 "namespaces": [ 00:07:52.219 { 00:07:52.219 "bdev_name": "Null3", 00:07:52.219 "name": "Null3", 00:07:52.219 "nguid": "79E184D3BD014BFB8DDD1A9C96D141BE", 00:07:52.219 "nsid": 1, 00:07:52.219 "uuid": "79e184d3-bd01-4bfb-8ddd-1a9c96d141be" 00:07:52.219 } 00:07:52.219 ], 00:07:52.219 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:52.219 "serial_number": "SPDK00000000000003", 00:07:52.219 "subtype": "NVMe" 00:07:52.219 }, 00:07:52.219 { 00:07:52.219 "allow_any_host": true, 00:07:52.219 "hosts": [], 00:07:52.219 "listen_addresses": [ 00:07:52.219 { 00:07:52.219 "adrfam": "IPv4", 00:07:52.219 "traddr": "10.0.0.2", 00:07:52.219 "trsvcid": "4420", 00:07:52.219 "trtype": "TCP" 00:07:52.219 } 00:07:52.219 ], 00:07:52.219 "max_cntlid": 65519, 00:07:52.219 "max_namespaces": 32, 00:07:52.219 "min_cntlid": 1, 00:07:52.219 "model_number": "SPDK bdev Controller", 00:07:52.219 "namespaces": [ 00:07:52.219 { 00:07:52.219 "bdev_name": "Null4", 00:07:52.219 "name": "Null4", 00:07:52.219 "nguid": "70917AD01DCE482792B00849F6613E4F", 00:07:52.219 "nsid": 1, 00:07:52.219 "uuid": "70917ad0-1dce-4827-92b0-0849f6613e4f" 00:07:52.219 } 00:07:52.219 ], 00:07:52.219 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:52.219 "serial_number": "SPDK00000000000004", 00:07:52.219 "subtype": "NVMe" 00:07:52.219 } 00:07:52.219 ] 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:52.219 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:52.506 rmmod nvme_tcp 00:07:52.506 rmmod nvme_fabrics 00:07:52.506 rmmod nvme_keyring 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 66047 ']' 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 66047 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 66047 ']' 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 66047 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66047 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:52.506 
14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66047' 00:07:52.506 killing process with pid 66047 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 66047 00:07:52.506 14:47:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 66047 00:07:52.506 14:47:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:52.506 14:47:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:52.506 14:47:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:52.506 14:47:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:52.506 14:47:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:52.506 14:47:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.506 14:47:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.506 14:47:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.798 14:47:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:52.798 00:07:52.798 real 0m1.581s 00:07:52.798 user 0m3.274s 00:07:52.798 sys 0m0.523s 00:07:52.798 14:47:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.798 ************************************ 00:07:52.798 END TEST nvmf_target_discovery 00:07:52.798 ************************************ 00:07:52.798 14:47:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.798 14:47:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:52.798 14:47:31 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:52.798 14:47:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:52.798 14:47:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.798 14:47:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.798 ************************************ 00:07:52.798 START TEST nvmf_referrals 00:07:52.798 ************************************ 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:52.798 * Looking for test storage... 
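For reference, the discovery-test teardown traced just above reduces to a handful of RPCs; a minimal sketch assuming a running nvmf_tgt and SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock (rpc_cmd in the trace is a thin wrapper around it), not the literal test script:

    # delete the four test subsystems and their backing null bdevs
    for i in $(seq 1 4); do
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        scripts/rpc.py bdev_null_delete "Null$i"
    done
    # drop the discovery referral added during setup
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    # confirm no bdevs are left behind
    scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'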
00:07:52.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:52.798 Cannot find device "nvmf_tgt_br" 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:52.798 Cannot find device "nvmf_tgt_br2" 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:52.798 Cannot find device "nvmf_tgt_br" 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:07:52.798 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:52.799 Cannot find device "nvmf_tgt_br2" 
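The referral checks that follow boil down to registering three referrals over RPC and confirming they appear in the discovery log page; a condensed sketch using the constants defined above (127.0.0.2-4, referral port 4430, discovery port 8009), again assuming scripts/rpc.py rather than the rpc_cmd wrapper:

    # register one referral per address on the running target
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    scripts/rpc.py nvmf_discovery_get_referrals | jq length    # expect 3
    # the same addresses should show up as referral records in the discovery log
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort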
00:07:52.799 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:07:52.799 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:52.799 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:52.799 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:52.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:52.799 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:07:52.799 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:52.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:52.799 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:07:52.799 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:52.799 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:52.799 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:52.799 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:52.799 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:53.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:07:53.057 00:07:53.057 --- 10.0.0.2 ping statistics --- 00:07:53.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.057 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:53.057 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:53.057 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:07:53.057 00:07:53.057 --- 10.0.0.3 ping statistics --- 00:07:53.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.057 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:53.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:07:53.057 00:07:53.057 --- 10.0.0.1 ping statistics --- 00:07:53.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.057 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=66264 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 66264 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 66264 ']' 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
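The nvmf_veth_init sequence traced above builds that virtual test topology with plain iproute2 calls; a condensed sketch with the same interface names (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is created the same way and omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge both veth peers together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # initiator -> target sanity check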
00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.057 14:47:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.057 [2024-07-12 14:47:31.706645] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:07:53.057 [2024-07-12 14:47:31.706764] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.316 [2024-07-12 14:47:31.848065] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.316 [2024-07-12 14:47:31.936684] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.316 [2024-07-12 14:47:31.936761] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.316 [2024-07-12 14:47:31.936779] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.316 [2024-07-12 14:47:31.936793] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.316 [2024-07-12 14:47:31.936806] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.316 [2024-07-12 14:47:31.936950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.316 [2024-07-12 14:47:31.937032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.316 [2024-07-12 14:47:31.937541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.316 [2024-07-12 14:47:31.937551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.250 [2024-07-12 14:47:32.697818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.250 [2024-07-12 14:47:32.726027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 
--hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:54.250 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:54.508 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:54.508 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:54.508 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:54.508 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.508 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.508 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.508 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:54.508 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.508 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.508 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.508 14:47:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:54.508 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.508 14:47:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:54.508 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:54.766 14:47:33 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.766 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:55.024 14:47:33 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:55.024 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:55.025 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:55.025 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:55.025 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.025 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:55.025 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:55.282 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:55.282 
14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:55.283 rmmod nvme_tcp 00:07:55.283 rmmod nvme_fabrics 00:07:55.283 rmmod nvme_keyring 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 66264 ']' 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 66264 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 66264 ']' 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 66264 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66264 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:55.283 killing process with pid 66264 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66264' 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 66264 00:07:55.283 14:47:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 66264 00:07:55.540 14:47:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:55.540 14:47:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:55.540 14:47:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:55.540 14:47:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:55.540 14:47:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:55.540 14:47:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.540 14:47:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.540 14:47:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.540 14:47:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:55.540 00:07:55.540 real 0m2.935s 00:07:55.540 user 0m9.542s 00:07:55.540 sys 0m0.738s 00:07:55.540 14:47:34 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.540 14:47:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.540 ************************************ 00:07:55.540 END TEST nvmf_referrals 00:07:55.540 ************************************ 00:07:55.540 14:47:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:55.540 14:47:34 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:55.540 14:47:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:55.540 14:47:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.540 14:47:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:55.540 ************************************ 00:07:55.540 START TEST nvmf_connect_disconnect 00:07:55.540 ************************************ 00:07:55.540 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:55.798 * Looking for test storage... 00:07:55.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.798 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.799 14:47:34 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:07:55.799 Cannot find device "nvmf_tgt_br" 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:55.799 Cannot find device "nvmf_tgt_br2" 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:55.799 Cannot find device "nvmf_tgt_br" 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:55.799 Cannot find device "nvmf_tgt_br2" 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:55.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:55.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:55.799 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:56.058 14:47:34 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:56.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:07:56.058 00:07:56.058 --- 10.0.0.2 ping statistics --- 00:07:56.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.058 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:56.058 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:56.058 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:07:56.058 00:07:56.058 --- 10.0.0.3 ping statistics --- 00:07:56.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.058 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:56.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:56.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:07:56.058 00:07:56.058 --- 10.0.0.1 ping statistics --- 00:07:56.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.058 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=66565 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 66565 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 66565 ']' 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.058 14:47:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:56.058 [2024-07-12 14:47:34.634435] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:07:56.058 [2024-07-12 14:47:34.634540] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.317 [2024-07-12 14:47:34.771326] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.317 [2024-07-12 14:47:34.843962] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
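For reference, the nvmf_veth_init sequence traced above boils down to a small, self-contained topology: the initiator keeps 10.0.0.1 on the host side, both target addresses (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk network namespace, and a bridge plus an iptables rule let TCP traffic to port 4420 flow between them. A condensed sketch, with names and addresses taken directly from the trace (the individual link-up steps and the "Cannot find device" cleanup noise are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side is moved into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br       # bridge the host-side veth peers together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3      # connectivity check, as in the trace above

Keeping the target interfaces inside a private namespace is what lets each test tear down and recreate the same 10.0.0.x addresses without disturbing the host's real networking.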
00:07:56.317 [2024-07-12 14:47:34.844260] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.317 [2024-07-12 14:47:34.844437] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.317 [2024-07-12 14:47:34.844649] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.317 [2024-07-12 14:47:34.844844] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.317 [2024-07-12 14:47:34.845068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.317 [2024-07-12 14:47:34.845147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.317 [2024-07-12 14:47:34.845581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.317 [2024-07-12 14:47:34.845593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:57.252 [2024-07-12 14:47:35.704983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
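The rpc_cmd calls in this stretch of the trace, together with the listener registration that follows just below, are what actually provision the target for the connect/disconnect loop. rpc_cmd is the test suite's wrapper around the repository's scripts/rpc.py talking to /var/tmp/spdk.sock (that path is an assumption about the standard SPDK layout), so the same setup can be restated roughly as the sketch below; the final nvme connect/disconnect pair is an assumption about what each of the five iterations does, since only the disconnect summaries appear in the log.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                 # what rpc_cmd ultimately drives (assumed path)
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0               # TCP transport, arguments as traced
    malloc=$($rpc bdev_malloc_create 64 512)                        # 64 MB bdev, 512-byte blocks -> "Malloc0"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$malloc"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Assumed shape of one of the five iterations; the trace only shows the
    # "disconnected 1 controller(s)" summaries they produce:
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1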
00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:57.252 [2024-07-12 14:47:35.772254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:57.252 14:47:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:59.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:06.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.648 14:47:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:08.648 14:47:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:08.648 14:47:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.648 14:47:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:08.648 14:47:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.648 14:47:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:08.648 14:47:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.648 14:47:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.648 rmmod nvme_tcp 00:08:08.648 rmmod nvme_fabrics 00:08:08.648 rmmod nvme_keyring 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 66565 ']' 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 66565 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 66565 ']' 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 66565 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66565 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:08.648 killing process with pid 66565 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66565' 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 66565 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 66565 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.648 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.906 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:08.906 00:08:08.906 real 0m13.139s 00:08:08.906 user 0m48.445s 00:08:08.906 sys 0m1.892s 00:08:08.906 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.906 14:47:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 ************************************ 00:08:08.906 END TEST nvmf_connect_disconnect 00:08:08.906 ************************************ 00:08:08.906 14:47:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:08.906 14:47:47 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:08.906 14:47:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:08.906 14:47:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.906 14:47:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 ************************************ 00:08:08.906 START TEST nvmf_multitarget 00:08:08.906 ************************************ 00:08:08.906 14:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:08.906 * Looking for test storage... 
00:08:08.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.906 14:47:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.907 14:47:47 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:08.907 Cannot find device "nvmf_tgt_br" 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:08.907 Cannot find device "nvmf_tgt_br2" 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:08.907 Cannot find device "nvmf_tgt_br" 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:08.907 Cannot find device "nvmf_tgt_br2" 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:08.907 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:08:09.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:09.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:09.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:09.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:08:09.166 00:08:09.166 --- 10.0.0.2 ping statistics --- 00:08:09.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.166 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:09.166 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:09.166 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:08:09.166 00:08:09.166 --- 10.0.0.3 ping statistics --- 00:08:09.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.166 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:09.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:09.166 00:08:09.166 --- 10.0.0.1 ping statistics --- 00:08:09.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.166 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=66964 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 66964 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 66964 ']' 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
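nvmfappstart then launches the target inside the namespace and blocks until its JSON-RPC socket answers. The waitforlisten helper is more involved than this, but its effect is roughly the sketch below (binary path and flags are taken from the trace; the polling loop and the rpc_get_methods probe are a simplification, not the helper's actual body):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the application is ready to accept commands.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done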
00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.166 14:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:09.425 [2024-07-12 14:47:47.857399] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:08:09.425 [2024-07-12 14:47:47.857509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.425 [2024-07-12 14:47:47.993925] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.425 [2024-07-12 14:47:48.062142] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.425 [2024-07-12 14:47:48.062199] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.425 [2024-07-12 14:47:48.062211] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.425 [2024-07-12 14:47:48.062220] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.425 [2024-07-12 14:47:48.062228] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.425 [2024-07-12 14:47:48.062501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.425 [2024-07-12 14:47:48.062589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.425 [2024-07-12 14:47:48.062931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.425 [2024-07-12 14:47:48.062951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.358 14:47:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.358 14:47:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:10.358 14:47:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.358 14:47:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.358 14:47:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:10.358 14:47:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.358 14:47:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:10.358 14:47:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:10.358 14:47:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:10.358 14:47:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:10.358 14:47:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:10.616 "nvmf_tgt_1" 00:08:10.616 14:47:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:10.873 "nvmf_tgt_2" 00:08:10.873 14:47:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
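Taken together with the checks that continue just below, the multitarget flow being exercised is short enough to restate in one place. Script path and arguments are exactly as traced; jq length simply counts entries in the JSON array returned by nvmf_get_targets, and reading -s as a per-target subsystem limit is an assumption.

    rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists at start
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # -s 32: assumed to cap subsystems on the new target
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two just created
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target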
00:08:10.873 14:47:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:10.873 14:47:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:10.873 14:47:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:11.130 true 00:08:11.130 14:47:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:11.130 true 00:08:11.130 14:47:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:11.130 14:47:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:11.387 rmmod nvme_tcp 00:08:11.387 rmmod nvme_fabrics 00:08:11.387 rmmod nvme_keyring 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 66964 ']' 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 66964 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 66964 ']' 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 66964 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66964 00:08:11.387 killing process with pid 66964 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66964' 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 66964 00:08:11.387 14:47:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 66964 00:08:11.645 14:47:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:11.645 14:47:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:11.645 14:47:50 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:11.645 14:47:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:11.645 14:47:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:11.645 14:47:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.645 14:47:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.645 14:47:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.645 14:47:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:11.645 00:08:11.645 real 0m2.854s 00:08:11.645 user 0m9.664s 00:08:11.645 sys 0m0.636s 00:08:11.645 14:47:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.645 14:47:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:11.645 ************************************ 00:08:11.645 END TEST nvmf_multitarget 00:08:11.645 ************************************ 00:08:11.645 14:47:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:11.645 14:47:50 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:11.645 14:47:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:11.645 14:47:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.645 14:47:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.645 ************************************ 00:08:11.645 START TEST nvmf_rpc 00:08:11.645 ************************************ 00:08:11.645 14:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:11.903 * Looking for test storage... 
00:08:11.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:11.903 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:11.904 Cannot find device "nvmf_tgt_br" 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.904 Cannot find device "nvmf_tgt_br2" 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:11.904 Cannot find device "nvmf_tgt_br" 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:11.904 Cannot find device "nvmf_tgt_br2" 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:11.904 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:12.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:08:12.162 00:08:12.162 --- 10.0.0.2 ping statistics --- 00:08:12.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.162 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:08:12.162 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:12.162 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:12.162 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:08:12.162 00:08:12.162 --- 10.0.0.3 ping statistics --- 00:08:12.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.162 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:12.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:12.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:12.163 00:08:12.163 --- 10.0.0.1 ping statistics --- 00:08:12.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.163 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=67191 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 67191 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 67191 ']' 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.163 14:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.163 [2024-07-12 14:47:50.759588] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:08:12.163 [2024-07-12 14:47:50.759686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.421 [2024-07-12 14:47:50.916408] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.421 [2024-07-12 14:47:50.975670] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.421 [2024-07-12 14:47:50.975721] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
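The common.sh@154-@207 commands traced above first tear down any leftover test network (the "Cannot find device" and "Cannot open network namespace" errors are expected and swallowed by the trailing true fallbacks), then rebuild it: one network namespace for the target, three veth pairs whose target-side ends are moved into that namespace, a bridge joining the host-side peers, an iptables rule opening TCP port 4420, and ping checks in both directions. A minimal standalone sketch of the same topology, assuming root privileges and reusing the interface and address names from this log (this is not the common.sh implementation itself):

  # Sketch only: rebuilds the test topology traced above; interface names and
  # addresses are copied from the log.
  set -e
  ip netns add nvmf_tgt_ns_spdk                                # target namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk sh -c \
      'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge                              # bridge for the host-side peers
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # reachability checks, as in the log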
00:08:12.421 [2024-07-12 14:47:50.975733] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.421 [2024-07-12 14:47:50.975741] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.421 [2024-07-12 14:47:50.975748] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.421 [2024-07-12 14:47:50.975863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.421 [2024-07-12 14:47:50.976623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.421 [2024-07-12 14:47:50.976703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.421 [2024-07-12 14:47:50.976717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:13.351 "poll_groups": [ 00:08:13.351 { 00:08:13.351 "admin_qpairs": 0, 00:08:13.351 "completed_nvme_io": 0, 00:08:13.351 "current_admin_qpairs": 0, 00:08:13.351 "current_io_qpairs": 0, 00:08:13.351 "io_qpairs": 0, 00:08:13.351 "name": "nvmf_tgt_poll_group_000", 00:08:13.351 "pending_bdev_io": 0, 00:08:13.351 "transports": [] 00:08:13.351 }, 00:08:13.351 { 00:08:13.351 "admin_qpairs": 0, 00:08:13.351 "completed_nvme_io": 0, 00:08:13.351 "current_admin_qpairs": 0, 00:08:13.351 "current_io_qpairs": 0, 00:08:13.351 "io_qpairs": 0, 00:08:13.351 "name": "nvmf_tgt_poll_group_001", 00:08:13.351 "pending_bdev_io": 0, 00:08:13.351 "transports": [] 00:08:13.351 }, 00:08:13.351 { 00:08:13.351 "admin_qpairs": 0, 00:08:13.351 "completed_nvme_io": 0, 00:08:13.351 "current_admin_qpairs": 0, 00:08:13.351 "current_io_qpairs": 0, 00:08:13.351 "io_qpairs": 0, 00:08:13.351 "name": "nvmf_tgt_poll_group_002", 00:08:13.351 "pending_bdev_io": 0, 00:08:13.351 "transports": [] 00:08:13.351 }, 00:08:13.351 { 00:08:13.351 "admin_qpairs": 0, 00:08:13.351 "completed_nvme_io": 0, 00:08:13.351 "current_admin_qpairs": 0, 00:08:13.351 "current_io_qpairs": 0, 00:08:13.351 "io_qpairs": 0, 00:08:13.351 "name": "nvmf_tgt_poll_group_003", 00:08:13.351 "pending_bdev_io": 0, 00:08:13.351 "transports": [] 00:08:13.351 } 00:08:13.351 ], 00:08:13.351 "tick_rate": 2200000000 00:08:13.351 }' 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.351 [2024-07-12 14:47:51.899635] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.351 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:13.351 "poll_groups": [ 00:08:13.351 { 00:08:13.351 "admin_qpairs": 0, 00:08:13.351 "completed_nvme_io": 0, 00:08:13.351 "current_admin_qpairs": 0, 00:08:13.351 "current_io_qpairs": 0, 00:08:13.351 "io_qpairs": 0, 00:08:13.351 "name": "nvmf_tgt_poll_group_000", 00:08:13.351 "pending_bdev_io": 0, 00:08:13.351 "transports": [ 00:08:13.351 { 00:08:13.351 "trtype": "TCP" 00:08:13.351 } 00:08:13.351 ] 00:08:13.351 }, 00:08:13.351 { 00:08:13.351 "admin_qpairs": 0, 00:08:13.351 "completed_nvme_io": 0, 00:08:13.351 "current_admin_qpairs": 0, 00:08:13.352 "current_io_qpairs": 0, 00:08:13.352 "io_qpairs": 0, 00:08:13.352 "name": "nvmf_tgt_poll_group_001", 00:08:13.352 "pending_bdev_io": 0, 00:08:13.352 "transports": [ 00:08:13.352 { 00:08:13.352 "trtype": "TCP" 00:08:13.352 } 00:08:13.352 ] 00:08:13.352 }, 00:08:13.352 { 00:08:13.352 "admin_qpairs": 0, 00:08:13.352 "completed_nvme_io": 0, 00:08:13.352 "current_admin_qpairs": 0, 00:08:13.352 "current_io_qpairs": 0, 00:08:13.352 "io_qpairs": 0, 00:08:13.352 "name": "nvmf_tgt_poll_group_002", 00:08:13.352 "pending_bdev_io": 0, 00:08:13.352 "transports": [ 00:08:13.352 { 00:08:13.352 "trtype": "TCP" 00:08:13.352 } 00:08:13.352 ] 00:08:13.352 }, 00:08:13.352 { 00:08:13.352 "admin_qpairs": 0, 00:08:13.352 "completed_nvme_io": 0, 00:08:13.352 "current_admin_qpairs": 0, 00:08:13.352 "current_io_qpairs": 0, 00:08:13.352 "io_qpairs": 0, 00:08:13.352 "name": "nvmf_tgt_poll_group_003", 00:08:13.352 "pending_bdev_io": 0, 00:08:13.352 "transports": [ 00:08:13.352 { 00:08:13.352 "trtype": "TCP" 00:08:13.352 } 00:08:13.352 ] 00:08:13.352 } 00:08:13.352 ], 00:08:13.352 "tick_rate": 2200000000 00:08:13.352 }' 00:08:13.352 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:13.352 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:13.352 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:13.352 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:13.352 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:13.352 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:13.352 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
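The jcount and jsum helpers being traced here (target/rpc.sh@14-@20) simply run a jq filter over the nvmf_get_stats JSON captured in the $stats variable above, then either count the emitted lines or sum them with awk. A self-contained sketch of the same two helpers, assuming the stats JSON is supplied on stdin instead of through $stats:

  # Sketch of the jcount/jsum helpers seen in the trace: apply a jq filter to the
  # nvmf_get_stats JSON and either count the results or sum them.
  jcount() {    # jcount '.poll_groups[].name'        -> number of poll groups
      local filter=$1
      jq "$filter" | wc -l
  }
  jsum() {      # jsum '.poll_groups[].io_qpairs'     -> total I/O qpairs
      local filter=$1
      jq "$filter" | awk '{s+=$1} END {print s}'
  }
  # Usage against a saved dump (stats.json is a hypothetical file name):
  #   jcount '.poll_groups[].name'         < stats.json   # the trace expects 4
  #   jsum   '.poll_groups[].admin_qpairs' < stats.json   # 0 before any connects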
00:08:13.352 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:13.352 14:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.609 Malloc1 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.609 [2024-07-12 14:47:52.074502] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -a 10.0.0.2 -s 4420 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -a 10.0.0.2 -s 4420 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -a 10.0.0.2 -s 4420 00:08:13.609 [2024-07-12 14:47:52.096759] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c' 00:08:13.609 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:13.609 could not add new controller: failed to write to nvme-fabrics device 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.609 14:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:13.866 14:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:13.866 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:13.866 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:13.866 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:13.866 14:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:15.815 14:47:54 
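The NOT wrapper traced at common/autotest_common.sh@648-@675 is how the test asserts that the first connect attempt must fail: it runs the command, captures the exit status, and succeeds only if the command did not. A reduced sketch of that pattern (the real helper also performs the valid_exec_arg lookup and signal handling visible in the trace):

  # Sketch of the NOT pattern used above: invert a command's exit status so that
  # an expected failure (here, a rejected nvme connect) counts as test success.
  NOT() {
      local es=0
      "$@" || es=$?
      if (( es > 128 )); then   # command died on a signal: propagate, do not mask
          return "$es"
      fi
      (( es != 0 ))             # NOT succeeds exactly when the command failed
  }
  # e.g. NOT nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1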
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:15.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:15.815 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:15.815 [2024-07-12 14:47:54.378097] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c' 00:08:15.815 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:15.815 could not add new controller: failed to write to nvme-fabrics device 00:08:15.816 14:47:54 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:15.816 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:15.816 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:15.816 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:15.816 14:47:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:15.816 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.816 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.816 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.816 14:47:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:16.073 14:47:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:16.073 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:16.073 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:16.073 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:16.073 14:47:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:17.969 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:17.969 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:17.969 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:17.969 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:17.969 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:17.969 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:17.969 14:47:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:17.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.969 14:47:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:17.969 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:17.969 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:17.969 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:18.227 14:47:56 
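What target/rpc.sh@52-@76 exercises across these lines is the subsystem host ACL: a connect with a hostnqn that has not been whitelisted must be rejected with "does not allow host", the same connect must succeed once nvmf_subsystem_add_host (or allow_any_host) has been applied, and removing the host must make it fail again. A condensed sketch of that check, assuming SPDK's scripts/rpc.py is reachable as rpc.py and reusing the host NQN from this run:

  # Sketch of the host-ACL check traced above (rpc.py stands in for the rpc_cmd
  # wrapper used by the test; subsystem and host NQNs are copied from the log).
  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c

  # Not whitelisted yet: the connect is expected to fail ("does not allow host").
  if nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN"; then
      echo "unexpected: connect succeeded without an ACL entry" >&2
      exit 1
  fi

  # Whitelist the host, after which the same connect should succeed.
  rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN"
  nvme disconnect -n "$SUBNQN"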
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.227 [2024-07-12 14:47:56.661269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:18.227 14:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:20.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.797 [2024-07-12 14:47:58.960812] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.797 14:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:20.797 14:47:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:20.797 14:47:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
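waitforserial and waitforserial_disconnect (common/autotest_common.sh@1198-@1231, repeated before and after every connect in this loop) are simple polls over lsblk: wait until a block device carrying the subsystem's serial number appears, or disappears again. A sketch of the appearance side, keeping the retry bound and 2-second sleep visible in the trace:

  # Sketch of the waitforserial polling pattern: retry lsblk until a device with
  # the expected serial shows up (the trace waits for SPDKISFASTANDAWESOME).
  waitforserial() {
      local serial=$1 want=${2:-1} i=0 have=0
      while (( i++ <= 15 )); do
          have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( have == want )) && return 0
          sleep 2
      done
      echo "device with serial $serial never appeared" >&2
      return 1
  }
  # waitforserial SPDKISFASTANDAWESOME   # as called at target/rpc.sh@88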
-- # local i=0 00:08:20.797 14:47:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:20.797 14:47:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:20.797 14:47:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:22.712 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:22.712 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:22.712 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:22.712 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:22.712 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:22.712 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:22.712 14:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:22.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.713 [2024-07-12 14:48:01.240673] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.713 14:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:22.970 14:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:22.970 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:22.970 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:22.970 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:22.970 14:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:24.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.869 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.127 [2024-07-12 14:48:03.531963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:25.127 14:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:27.666 
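Each pass of the seq 1 5 loop at target/rpc.sh@81-@94 rebuilds cnode1 from scratch: create the subsystem with the SPDKISFASTANDAWESOME serial, add the TCP listener on 10.0.0.2:4420, attach the Malloc1 bdev as namespace 5, open it to any host, connect from the initiator, verify the serial, then disconnect and delete everything. One iteration sketched with rpc.py (again standing in for the rpc_cmd wrapper; arguments mirror the traced rpc_cmd lines):

  # Sketch of a single iteration of the create/connect/teardown loop traced above
  # (NQN, address, port, bdev name and nsid are copied from the log).
  SUBNQN=nqn.2016-06.io.spdk:cnode1
  SERIAL=SPDKISFASTANDAWESOME

  rpc.py nvmf_create_subsystem "$SUBNQN" -s "$SERIAL"
  rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5     # Malloc1 bdev as nsid 5
  rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN"
  waitforserial "$SERIAL"                                  # poll helper sketched earlier
  nvme disconnect -n "$SUBNQN"

  rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 5
  rpc.py nvmf_delete_subsystem "$SUBNQN"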
14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:27.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.666 [2024-07-12 14:48:05.923558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.666 14:48:05 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.666 14:48:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:27.666 14:48:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:27.666 14:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:27.666 14:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:27.666 14:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:27.666 14:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:29.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.567 [2024-07-12 14:48:08.215006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.567 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.826 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.826 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:29.826 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.826 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.826 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.826 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 [2024-07-12 14:48:08.263041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 [2024-07-12 14:48:08.311071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 [2024-07-12 14:48:08.359118] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 [2024-07-12 14:48:08.407115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.827 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:29.827 "poll_groups": [ 00:08:29.827 { 00:08:29.827 "admin_qpairs": 2, 00:08:29.827 "completed_nvme_io": 116, 00:08:29.827 "current_admin_qpairs": 0, 00:08:29.827 "current_io_qpairs": 0, 00:08:29.827 "io_qpairs": 16, 00:08:29.827 "name": "nvmf_tgt_poll_group_000", 00:08:29.827 "pending_bdev_io": 0, 00:08:29.827 "transports": [ 00:08:29.827 { 00:08:29.827 "trtype": "TCP" 00:08:29.828 } 00:08:29.828 ] 00:08:29.828 }, 00:08:29.828 { 00:08:29.828 "admin_qpairs": 3, 00:08:29.828 "completed_nvme_io": 116, 00:08:29.828 "current_admin_qpairs": 0, 00:08:29.828 "current_io_qpairs": 0, 00:08:29.828 "io_qpairs": 17, 00:08:29.828 "name": "nvmf_tgt_poll_group_001", 00:08:29.828 "pending_bdev_io": 0, 00:08:29.828 "transports": [ 00:08:29.828 { 00:08:29.828 "trtype": "TCP" 00:08:29.828 } 00:08:29.828 ] 00:08:29.828 }, 00:08:29.828 { 00:08:29.828 "admin_qpairs": 1, 00:08:29.828 
"completed_nvme_io": 70, 00:08:29.828 "current_admin_qpairs": 0, 00:08:29.828 "current_io_qpairs": 0, 00:08:29.828 "io_qpairs": 19, 00:08:29.828 "name": "nvmf_tgt_poll_group_002", 00:08:29.828 "pending_bdev_io": 0, 00:08:29.828 "transports": [ 00:08:29.828 { 00:08:29.828 "trtype": "TCP" 00:08:29.828 } 00:08:29.828 ] 00:08:29.828 }, 00:08:29.828 { 00:08:29.828 "admin_qpairs": 1, 00:08:29.828 "completed_nvme_io": 118, 00:08:29.828 "current_admin_qpairs": 0, 00:08:29.828 "current_io_qpairs": 0, 00:08:29.828 "io_qpairs": 18, 00:08:29.828 "name": "nvmf_tgt_poll_group_003", 00:08:29.828 "pending_bdev_io": 0, 00:08:29.828 "transports": [ 00:08:29.828 { 00:08:29.828 "trtype": "TCP" 00:08:29.828 } 00:08:29.828 ] 00:08:29.828 } 00:08:29.828 ], 00:08:29.828 "tick_rate": 2200000000 00:08:29.828 }' 00:08:29.828 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:29.828 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:29.828 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:29.828 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.086 rmmod nvme_tcp 00:08:30.086 rmmod nvme_fabrics 00:08:30.086 rmmod nvme_keyring 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 67191 ']' 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 67191 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 67191 ']' 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 67191 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67191 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:30.086 killing process with pid 67191 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67191' 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 67191 00:08:30.086 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 67191 00:08:30.345 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:30.345 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:30.345 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:30.345 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:30.345 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:30.345 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.345 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.345 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.345 14:48:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:30.345 00:08:30.345 real 0m18.640s 00:08:30.345 user 1m10.037s 00:08:30.345 sys 0m2.568s 00:08:30.345 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.345 14:48:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.345 ************************************ 00:08:30.345 END TEST nvmf_rpc 00:08:30.345 ************************************ 00:08:30.345 14:48:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:30.345 14:48:08 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:30.345 14:48:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:30.345 14:48:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.345 14:48:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:30.345 ************************************ 00:08:30.345 START TEST nvmf_invalid 00:08:30.345 ************************************ 00:08:30.345 14:48:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:30.604 * Looking for test storage... 
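The nvmf_rpc pass that finishes above repeatedly drives the target through SPDK's JSON-RPC interface via rpc_cmd. As a hedged sketch of one loop iteration, using only the calls and flags visible in the trace and assuming a target already serving /var/tmp/spdk.sock with a Malloc1 bdev present, the equivalent rpc.py sequence would be roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME          # create subsystem with the serial number used above
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 # listen on NVMe/TCP port 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                          # expose the bdev as namespace 1
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The jsum check applied to the nvmf_get_stats output above is simply a jq projection summed with awk, for example over io_qpairs across all poll groups:

    $rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'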
00:08:30.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.604 
14:48:09 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.604 14:48:09 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:30.604 Cannot find device "nvmf_tgt_br" 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:30.604 Cannot find device "nvmf_tgt_br2" 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:30.604 Cannot find device "nvmf_tgt_br" 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:30.604 Cannot find device "nvmf_tgt_br2" 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:30.604 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:30.604 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:30.604 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:30.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:08:30.862 00:08:30.862 --- 10.0.0.2 ping statistics --- 00:08:30.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.862 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:30.862 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:30.862 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:08:30.862 00:08:30.862 --- 10.0.0.3 ping statistics --- 00:08:30.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.862 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:30.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:30.862 00:08:30.862 --- 10.0.0.1 ping statistics --- 00:08:30.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.862 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.862 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.863 14:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:30.863 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.863 14:48:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:30.863 14:48:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:30.863 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=67702 00:08:30.863 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.863 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 67702 00:08:30.863 14:48:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 67702 ']' 00:08:30.863 14:48:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.863 14:48:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.863 14:48:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.863 14:48:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.863 14:48:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:30.863 [2024-07-12 14:48:09.466486] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
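The nvmf_veth_init sequence traced above (run before nvmfappstart launched the target) builds the virtual topology the TCP tests talk over: veth pairs bridged on the host, with the target-side end moved into the nvmf_tgt_ns_spdk network namespace. Reduced to the commands that matter, as a sketch using the interface names and addresses from this run (per-interface "up" steps omitted), the fixture looks roughly like:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator end stays in the host namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target end is pushed into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                          # bridge the host-side peers together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                               # reachability check before starting nvmf_tgt

The pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the bridge forwards traffic before nvmf_tgt is started inside the namespace with ip netns exec.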
00:08:30.863 [2024-07-12 14:48:09.466586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.121 [2024-07-12 14:48:09.600329] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.121 [2024-07-12 14:48:09.662772] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.121 [2024-07-12 14:48:09.662825] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.121 [2024-07-12 14:48:09.662837] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.121 [2024-07-12 14:48:09.662845] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.121 [2024-07-12 14:48:09.662852] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.121 [2024-07-12 14:48:09.662948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.121 [2024-07-12 14:48:09.663731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.121 [2024-07-12 14:48:09.663797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.121 [2024-07-12 14:48:09.663808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.121 14:48:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.121 14:48:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:08:31.121 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.121 14:48:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:31.121 14:48:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:31.379 14:48:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.379 14:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:31.379 14:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5139 00:08:31.638 [2024-07-12 14:48:10.089264] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:31.638 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/12 14:48:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode5139 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:08:31.638 request: 00:08:31.638 { 00:08:31.638 "method": "nvmf_create_subsystem", 00:08:31.638 "params": { 00:08:31.638 "nqn": "nqn.2016-06.io.spdk:cnode5139", 00:08:31.638 "tgt_name": "foobar" 00:08:31.638 } 00:08:31.638 } 00:08:31.638 Got JSON-RPC error response 00:08:31.638 GoRPCClient: error on JSON-RPC call' 00:08:31.638 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/12 14:48:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode5139 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:08:31.638 request: 
00:08:31.638 { 00:08:31.638 "method": "nvmf_create_subsystem", 00:08:31.638 "params": { 00:08:31.638 "nqn": "nqn.2016-06.io.spdk:cnode5139", 00:08:31.638 "tgt_name": "foobar" 00:08:31.638 } 00:08:31.638 } 00:08:31.638 Got JSON-RPC error response 00:08:31.638 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:31.638 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:31.638 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13977 00:08:31.895 [2024-07-12 14:48:10.393615] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13977: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:31.895 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/12 14:48:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13977 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:08:31.895 request: 00:08:31.895 { 00:08:31.895 "method": "nvmf_create_subsystem", 00:08:31.895 "params": { 00:08:31.895 "nqn": "nqn.2016-06.io.spdk:cnode13977", 00:08:31.895 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:08:31.895 } 00:08:31.895 } 00:08:31.895 Got JSON-RPC error response 00:08:31.895 GoRPCClient: error on JSON-RPC call' 00:08:31.895 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/12 14:48:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13977 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:08:31.895 request: 00:08:31.895 { 00:08:31.895 "method": "nvmf_create_subsystem", 00:08:31.895 "params": { 00:08:31.895 "nqn": "nqn.2016-06.io.spdk:cnode13977", 00:08:31.895 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:08:31.895 } 00:08:31.895 } 00:08:31.895 Got JSON-RPC error response 00:08:31.895 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:31.895 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:31.895 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12075 00:08:32.152 [2024-07-12 14:48:10.637830] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12075: invalid model number 'SPDK_Controller' 00:08:32.152 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/12 14:48:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode12075], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:08:32.152 request: 00:08:32.152 { 00:08:32.152 "method": "nvmf_create_subsystem", 00:08:32.152 "params": { 00:08:32.152 "nqn": "nqn.2016-06.io.spdk:cnode12075", 00:08:32.152 "model_number": "SPDK_Controller\u001f" 00:08:32.152 } 00:08:32.152 } 00:08:32.152 Got JSON-RPC error response 00:08:32.152 GoRPCClient: error on JSON-RPC call' 00:08:32.152 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/12 14:48:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode12075], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:08:32.152 request: 00:08:32.152 { 00:08:32.152 "method": "nvmf_create_subsystem", 00:08:32.152 "params": { 00:08:32.152 "nqn": "nqn.2016-06.io.spdk:cnode12075", 00:08:32.152 "model_number": "SPDK_Controller\u001f" 00:08:32.152 } 00:08:32.152 } 00:08:32.152 Got JSON-RPC error response 00:08:32.152 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:32.152 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:32.152 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:32.153 14:48:10 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:08:32.153 14:48:10 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.153 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:32.154 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:32.154 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:32.154 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.154 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.154 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:08:32.154 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:08:32.154 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:08:32.154 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.154 14:48:10 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.154 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:08:32.154 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '2ZJOz2\BU NGd*1H'\''JFjO' 00:08:32.154 14:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '2ZJOz2\BU NGd*1H'\''JFjO' nqn.2016-06.io.spdk:cnode29430 00:08:32.412 [2024-07-12 14:48:10.994134] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29430: invalid serial number '2ZJOz2\BU NGd*1H'JFjO' 00:08:32.412 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/12 14:48:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29430 serial_number:2ZJOz2\BU NGd*1H'\''JFjO], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 2ZJOz2\BU NGd*1H'\''JFjO 00:08:32.412 request: 00:08:32.412 { 00:08:32.412 "method": "nvmf_create_subsystem", 00:08:32.412 "params": { 00:08:32.412 "nqn": "nqn.2016-06.io.spdk:cnode29430", 00:08:32.412 "serial_number": "2ZJOz2\\BU NGd*1H'\''JFjO" 00:08:32.412 } 00:08:32.412 } 00:08:32.412 Got JSON-RPC error response 00:08:32.412 GoRPCClient: error on JSON-RPC call' 00:08:32.412 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/12 14:48:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29430 serial_number:2ZJOz2\BU NGd*1H'JFjO], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 2ZJOz2\BU NGd*1H'JFjO 00:08:32.412 request: 00:08:32.412 { 00:08:32.412 "method": "nvmf_create_subsystem", 00:08:32.412 "params": { 00:08:32.412 "nqn": "nqn.2016-06.io.spdk:cnode29430", 00:08:32.412 "serial_number": "2ZJOz2\\BU NGd*1H'JFjO" 00:08:32.412 } 00:08:32.412 } 00:08:32.413 Got JSON-RPC error response 00:08:32.413 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 99 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:08:32.413 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.671 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x49' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=t 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ W == \- ]] 00:08:32.672 14:48:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Wl5/H11u5cX< /dev/null' 00:08:35.773 14:48:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.773 14:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:35.773 ************************************ 00:08:35.773 END TEST nvmf_invalid 00:08:35.773 ************************************ 00:08:35.773 00:08:35.773 real 0m5.413s 00:08:35.773 user 0m22.103s 00:08:35.773 sys 0m1.123s 00:08:35.773 14:48:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.773 14:48:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:35.773 14:48:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:35.773 14:48:14 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:35.773 14:48:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:35.773 14:48:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.773 14:48:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.773 ************************************ 00:08:35.773 START TEST nvmf_abort 00:08:35.773 ************************************ 00:08:35.773 14:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:36.032 * Looking for test storage... 
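A minimal sketch of the string-building loop traced above (target/invalid.sh@24-25), reconstructed only from the printf/echo calls visible in the log; the length and the code-point selection are assumptions, and the resulting string is presumably what the truncated echo on this line was printing:

# Append one ASCII character per iteration, exactly as the xtrace shows:
# render a code point with printf %x, expand it with echo -e, append it.
length=41                              # assumed; the real length is set earlier in the script
string=
for (( ll = 0; ll < length; ll++ )); do
    code=$(( RANDOM % 96 + 32 ))       # code points 0x20-0x7f, matching the range seen above (selection is an assumption)
    hex=$(printf %x "$code")           # e.g. 99   -> 63
    char=$(echo -e "\x$hex")           # e.g. \x63 -> c
    string+=$char                      # builds strings like 'Wl5/H11u5cX...'
done
echo "$string"                         # used by the test as a deliberately invalid name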
00:08:36.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:08:36.032 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:36.033 Cannot find device "nvmf_tgt_br" 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:36.033 Cannot find device "nvmf_tgt_br2" 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:36.033 Cannot find device "nvmf_tgt_br" 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:36.033 Cannot find device "nvmf_tgt_br2" 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:36.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:36.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:36.033 14:48:14 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:36.291 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:36.291 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:36.291 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:36.291 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:36.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:08:36.292 00:08:36.292 --- 10.0.0.2 ping statistics --- 00:08:36.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.292 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:36.292 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:36.292 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:08:36.292 00:08:36.292 --- 10.0.0.3 ping statistics --- 00:08:36.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.292 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:36.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:36.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:36.292 00:08:36.292 --- 10.0.0.1 ping statistics --- 00:08:36.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.292 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:36.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=68194 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 68194 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 68194 ']' 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.292 14:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:36.550 [2024-07-12 14:48:14.954367] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:08:36.550 [2024-07-12 14:48:14.954442] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.550 [2024-07-12 14:48:15.092815] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:36.550 [2024-07-12 14:48:15.161063] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.550 [2024-07-12 14:48:15.161304] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
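The nvmf_veth_init sequence captured above builds the whole test topology before the target starts; condensed here as a sketch using only the interface names, addresses, and rules that appear in the trace (the initial cleanup and its "Cannot find device" noise omitted):

# One veth pair for the initiator and two for the target; the target-side
# ends are moved into the nvmf_tgt_ns_spdk namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator is 10.0.0.1 on the host; the target answers on 10.0.0.2 and
# 10.0.0.3 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

# Open NVMe/TCP port 4420 toward the initiator interface, allow bridge-local
# forwarding, and verify reachability in both directions with single pings.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1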
00:08:36.550 [2024-07-12 14:48:15.161492] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.550 [2024-07-12 14:48:15.161774] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.550 [2024-07-12 14:48:15.161818] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.550 [2024-07-12 14:48:15.162107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.550 [2024-07-12 14:48:15.162198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.550 [2024-07-12 14:48:15.162204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:36.808 [2024-07-12 14:48:15.288408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:36.808 Malloc0 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:36.808 Delay0 00:08:36.808 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.809 14:48:15 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:36.809 [2024-07-12 14:48:15.351999] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.809 14:48:15 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:37.067 [2024-07-12 14:48:15.535819] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:38.969 Initializing NVMe Controllers 00:08:38.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:38.969 controller IO queue size 128 less than required 00:08:38.969 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:38.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:38.969 Initialization complete. Launching workers. 
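A hedged restatement of what abort.sh has just assembled, using only the rpc and example invocations visible in the trace (shown as direct rpc.py calls; the harness actually goes through its rpc_cmd wrapper). The delay bdev's 1000000-value arguments are latencies, assumed to be the usual microsecond units, so at queue depth 128 most reads are still outstanding when the 1-second run ends and get aborted, which is what the counters that follow measure:

# Target side: 64 MiB malloc bdev behind a ~1 s delay bdev, exported as cnode0
# on 10.0.0.2:4420 (all values taken from the trace above).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: the abort example drives the namespace for 1 second on core 0
# at queue depth 128, then aborts whatever has not completed.
build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128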
00:08:38.969 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31090 00:08:38.969 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31151, failed to submit 62 00:08:38.969 success 31094, unsuccess 57, failed 0 00:08:38.969 14:48:17 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:38.969 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.969 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:38.969 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.969 14:48:17 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:38.969 14:48:17 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:38.969 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:38.969 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:39.228 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:39.228 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:39.228 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:39.228 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:39.228 rmmod nvme_tcp 00:08:39.228 rmmod nvme_fabrics 00:08:39.228 rmmod nvme_keyring 00:08:39.229 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:39.229 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:39.229 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:39.229 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 68194 ']' 00:08:39.229 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 68194 00:08:39.229 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 68194 ']' 00:08:39.229 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 68194 00:08:39.229 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:39.229 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:39.229 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68194 00:08:39.229 killing process with pid 68194 00:08:39.229 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:39.229 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:39.229 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68194' 00:08:39.229 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 68194 00:08:39.229 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 68194 00:08:39.488 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:39.488 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:39.488 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:39.488 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:39.488 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:39.488 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.488 14:48:17 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.488 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.488 14:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:39.488 00:08:39.489 real 0m3.509s 00:08:39.489 user 0m9.935s 00:08:39.489 sys 0m0.878s 00:08:39.489 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.489 14:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:39.489 ************************************ 00:08:39.489 END TEST nvmf_abort 00:08:39.489 ************************************ 00:08:39.489 14:48:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:39.489 14:48:17 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:39.489 14:48:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:39.489 14:48:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.489 14:48:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:39.489 ************************************ 00:08:39.489 START TEST nvmf_ns_hotplug_stress 00:08:39.489 ************************************ 00:08:39.489 14:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:39.489 * Looking for test storage... 00:08:39.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:39.489 14:48:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.489 14:48:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:39.489 Cannot find device "nvmf_tgt_br" 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:39.489 Cannot find device "nvmf_tgt_br2" 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:39.489 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:39.838 Cannot find device "nvmf_tgt_br" 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:39.838 Cannot find device "nvmf_tgt_br2" 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:39.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:39.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:39.838 14:48:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:39.838 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:39.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:08:39.839 00:08:39.839 --- 10.0.0.2 ping statistics --- 00:08:39.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.839 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:39.839 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:39.839 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:39.839 00:08:39.839 --- 10.0.0.3 ping statistics --- 00:08:39.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.839 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:39.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:39.839 00:08:39.839 --- 10.0.0.1 ping statistics --- 00:08:39.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.839 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=68424 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 68424 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 68424 ']' 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.839 14:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.098 [2024-07-12 14:48:18.503565] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:08:40.098 [2024-07-12 14:48:18.503934] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.098 [2024-07-12 14:48:18.643457] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:40.098 [2024-07-12 14:48:18.724841] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
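nvmfappstart here launches the target inside the test namespace and then blocks until its RPC socket answers; a rough sketch, under the assumption that the wait can be expressed as a plain rpc.py polling loop (the harness's waitforlisten helper is more careful):

# -m 0xE pins reactors to cores 1-3 (matching the 'Reactor started' lines below),
# -e 0xFFFF enables every tracepoint group, -i 0 sets the shared-memory ID.
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Poll the default UNIX socket /var/tmp/spdk.sock until the target is ready
# (assumed stand-in for waitforlisten).
until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done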
00:08:40.098 [2024-07-12 14:48:18.725094] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.098 [2024-07-12 14:48:18.725257] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.098 [2024-07-12 14:48:18.725487] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.098 [2024-07-12 14:48:18.725548] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.098 [2024-07-12 14:48:18.725859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.098 [2024-07-12 14:48:18.725956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.098 [2024-07-12 14:48:18.725962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.033 14:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.033 14:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:41.033 14:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.033 14:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:41.033 14:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.033 14:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.033 14:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:41.033 14:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:41.291 [2024-07-12 14:48:19.800355] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.291 14:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:41.549 14:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.807 [2024-07-12 14:48:20.286582] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.807 14:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.065 14:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:42.323 Malloc0 00:08:42.323 14:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:42.581 Delay0 00:08:42.581 14:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.839 14:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:43.096 NULL1 00:08:43.096 
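From this point ns_hotplug_stress.sh keeps a 30-second randread load running against cnode1 while it repeatedly removes and re-adds the namespace and grows the NULL1 bdev; a condensed sketch of the loop traced below, using only the commands that appear in the log (loop termination and the rpc_py wrapper are simplified):

# Background load: 512 B random reads, queue depth 128, 30 s, with -Q 1000 as in
# the trace, which appears to let the run keep going despite the read errors
# produced while namespace 1 is detached.
build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do        # only keep stressing while the load is alive
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$(( null_size + 1 ))
    scripts/rpc.py bdev_null_resize NULL1 "$null_size"
done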
14:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:43.661 14:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68556 00:08:43.661 14:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:43.661 14:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:43.661 14:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.594 Read completed with error (sct=0, sc=11) 00:08:44.852 14:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.110 14:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:45.110 14:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:45.368 true 00:08:45.368 14:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:45.368 14:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.318 14:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.318 14:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:46.318 14:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:46.575 true 00:08:46.575 14:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:46.575 14:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.833 14:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.090 14:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:47.090 14:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:47.346 true 00:08:47.346 14:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:47.346 14:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.911 14:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.911 14:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:47.911 14:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:48.168 true 00:08:48.168 14:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:48.168 14:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.099 14:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.358 14:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:49.358 14:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:49.614 true 00:08:49.614 14:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:49.614 14:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.872 14:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.128 14:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:50.128 14:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:50.384 true 00:08:50.384 14:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:50.384 14:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.641 14:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.900 14:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:50.900 14:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:51.158 true 00:08:51.158 14:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:51.158 14:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.090 14:48:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.346 14:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:52.346 14:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:52.603 true 00:08:52.603 14:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:52.603 14:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.861 14:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.118 14:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:53.118 14:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:53.376 true 00:08:53.376 14:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:53.376 14:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.633 14:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.891 14:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:53.891 14:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:54.150 true 00:08:54.418 14:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:54.418 14:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.351 14:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.351 14:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:55.351 14:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:55.916 true 00:08:55.916 14:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:55.916 14:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.174 14:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.432 14:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:56.432 14:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1012 00:08:56.432 true 00:08:56.690 14:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:56.691 14:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.950 14:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.208 14:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:57.208 14:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:57.466 true 00:08:57.466 14:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:57.466 14:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.723 14:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.981 14:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:57.981 14:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:58.239 true 00:08:58.239 14:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:58.239 14:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.184 14:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.441 14:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:59.441 14:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:59.698 true 00:08:59.698 14:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:08:59.698 14:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.262 14:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.519 14:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:00.519 14:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:00.777 true 00:09:00.777 14:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:09:00.777 14:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.035 14:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.292 14:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:01.292 14:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:01.578 true 00:09:01.578 14:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:09:01.578 14:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.835 14:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.094 14:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:02.094 14:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:02.351 true 00:09:02.351 14:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:09:02.351 14:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.284 14:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.541 14:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:03.541 14:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:03.799 true 00:09:03.799 14:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:09:03.799 14:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.056 14:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.314 14:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:04.314 14:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:04.576 true 00:09:04.576 14:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:09:04.576 14:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.142 14:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.142 14:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:05.142 14:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:05.400 true 
00:09:05.400 14:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:09:05.400 14:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.657 14:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.223 14:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:06.223 14:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:06.223 true 00:09:06.223 14:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:09:06.223 14:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.155 14:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.414 14:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:07.414 14:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:07.672 true 00:09:07.672 14:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:09:07.672 14:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.929 14:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.187 14:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:08.187 14:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:08.445 true 00:09:08.445 14:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:09:08.445 14:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.703 14:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.268 14:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:09.268 14:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:09.526 true 00:09:09.526 14:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:09:09.526 14:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.091 14:48:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.673 14:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:10.673 14:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:10.673 true 00:09:10.673 14:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:09:10.673 14:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.931 14:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.495 14:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:11.495 14:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:11.752 true 00:09:11.752 14:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:09:11.752 14:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.027 14:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.285 14:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:12.285 14:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:12.542 true 00:09:12.543 14:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:09:12.543 14:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.800 14:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.058 14:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:13.058 14:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:13.315 true 00:09:13.315 14:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556 00:09:13.315 14:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.250 Initializing NVMe Controllers 00:09:14.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:14.250 Controller IO queue size 128, less than required. 00:09:14.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:14.250 Controller IO queue size 128, less than required. 
00:09:14.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:14.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:14.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:14.250 Initialization complete. Launching workers.
00:09:14.250 ========================================================
00:09:14.250 Latency(us)
00:09:14.250 Device Information : IOPS MiB/s Average min max
00:09:14.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 347.03 0.17 134359.88 3193.30 1111117.04
00:09:14.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6884.56 3.36 18592.54 3470.42 621436.05
00:09:14.250 ========================================================
00:09:14.250 Total : 7231.59 3.53 24148.04 3193.30 1111117.04
00:09:14.250
00:09:14.250 14:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:14.508 14:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:09:14.508 14:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:09:14.766 true
00:09:14.766 14:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68556
00:09:14.766 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68556) - No such process
00:09:14.766 14:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68556
00:09:14.766 14:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:15.023 14:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:15.280 14:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:15.280 14:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:15.280 14:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:15.280 14:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:15.280 14:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:15.538 null0
00:09:15.538 14:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:15.538 14:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:15.538 14:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:15.797 null1
00:09:15.797 14:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:15.797 14:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:15.797 14:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:16.055 null2
00:09:16.055 14:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:16.055 14:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:16.055 14:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:16.312 null3 00:09:16.312 14:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:16.312 14:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:16.312 14:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:16.570 null4 00:09:16.570 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:16.570 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:16.570 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:16.827 null5 00:09:16.827 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:16.827 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:16.827 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:17.085 null6 00:09:17.085 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.085 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.085 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:17.345 null7 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69619 69620 69623 69625 69626 69628 69631 69633 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.345 14:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.604 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.604 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:17.604 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.604 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.604 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.604 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.604 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.861 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.861 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.861 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.861 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.861 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.861 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.861 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.861 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.861 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.861 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.861 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.861 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.861 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.118 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.375 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.375 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.375 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.375 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.375 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.375 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.375 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.375 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.375 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.375 14:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:18.632 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.633 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.890 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.890 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.890 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.890 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.891 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.891 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.891 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.891 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.891 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.891 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.891 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.891 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:19.148 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.148 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.148 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.148 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:19.148 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.148 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.148 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:19.148 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.148 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.148 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.149 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:19.149 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.149 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.149 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:19.149 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.406 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.406 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.406 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:19.406 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.406 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.406 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:19.406 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.406 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.406 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.406 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:19.406 14:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.406 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:19.406 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.664 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.664 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.664 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.664 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:19.664 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.664 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.664 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:19.664 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.664 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.664 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:19.664 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.664 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:19.922 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.922 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.922 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:19.922 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.922 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.922 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:19.922 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.922 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.922 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.922 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:19.922 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.922 14:48:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:20.180 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.180 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.180 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:20.180 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.180 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.180 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.180 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:20.180 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.180 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:20.180 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.180 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.180 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:20.438 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.438 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.438 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:20.438 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:20.438 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.438 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.438 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:20.438 14:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:20.438 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:20.438 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.438 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:09:20.438 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:20.438 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.438 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.438 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:20.696 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.696 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.696 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:20.696 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:20.696 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.696 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.696 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:20.696 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:20.696 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.696 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.696 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:20.696 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.696 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.696 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:20.953 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.953 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:20.953 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.953 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:20.953 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.953 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.953 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:20.953 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.953 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.953 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:20.953 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:20.953 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:21.209 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.209 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.209 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:21.209 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.209 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.209 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:21.209 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.209 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.209 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:21.209 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.209 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.209 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:21.209 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:21.209 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:21.210 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.210 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.210 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:21.465 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:21.465 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.465 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:21.465 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:21.465 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.465 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.465 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.465 14:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:21.465 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:21.465 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.465 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.465 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.465 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:21.465 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.722 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:21.722 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.722 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.722 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:21.722 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.722 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.722 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:21.722 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:21.979 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.979 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.979 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:21.979 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.979 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.979 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:21.979 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.979 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.979 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:21.979 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.979 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.979 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:21.979 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:21.979 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:21.979 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.236 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.236 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.236 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:22.236 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:22.236 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:22.236 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.236 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.236 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:22.236 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:22.236 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.236 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.236 14:49:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:22.494 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.494 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.494 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.494 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:22.494 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.494 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.494 14:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:22.494 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:22.494 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:22.494 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:22.751 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.751 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.751 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:22.751 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.751 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.751 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:22.751 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.751 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:22.751 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.751 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.751 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:22.751 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.751 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:09:22.751 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:23.009 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.009 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.009 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:23.009 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.009 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.009 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.009 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.009 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:23.009 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:23.009 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.009 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.009 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:23.009 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:23.009 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:23.267 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:23.267 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:23.267 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.267 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.267 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.267 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:23.267 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.267 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.267 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:09:23.525 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.525 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.525 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.525 14:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.525 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.525 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.525 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.525 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.525 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:23.525 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.525 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.525 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:23.783 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.039 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.039 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.039 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.039 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.039 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:24.039 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:24.039 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:24.039 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:24.039 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:24.039 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:24.039 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.039 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:24.039 rmmod nvme_tcp 00:09:24.297 rmmod nvme_fabrics 00:09:24.297 rmmod nvme_keyring 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 68424 ']' 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 68424 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 68424 ']' 
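Note on the churn traced above: the @16/@17/@18 tags are xtrace echoes of target/ns_hotplug_stress.sh, where line 16 holds both the counter increment and the "i < 10" bound (so it reads like a for-loop header), line 17 issues nvmf_subsystem_add_ns and line 18 issues nvmf_subsystem_remove_ns against cnode1 over the null0..null7 bdevs. The adds and removes interleave out of order in the log, which is consistent with several such loops running concurrently. A minimal sketch of one such worker, reconstructed from the trace only; the random namespace selection is an assumption, not the verbatim script:

# Hedged reconstruction of the loop echoed above, not the real ns_hotplug_stress.sh source.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; ++i )); do                # @16: increment and bound share one script line
    nsid=$(( RANDOM % 8 + 1 ))                  # assumption: NSIDs 1..8 map onto bdevs null0..null7
    "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$(( nsid - 1 ))"   # @17
    "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"                          # @18
done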
00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 68424 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68424 00:09:24.297 killing process with pid 68424 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68424' 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 68424 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 68424 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:24.297 00:09:24.297 real 0m44.971s 00:09:24.297 user 3m41.494s 00:09:24.297 sys 0m12.961s 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.297 14:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.297 ************************************ 00:09:24.297 END TEST nvmf_ns_hotplug_stress 00:09:24.297 ************************************ 00:09:24.556 14:49:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:24.556 14:49:02 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:24.556 14:49:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:24.556 14:49:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.556 14:49:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:24.556 ************************************ 00:09:24.556 START TEST nvmf_connect_stress 00:09:24.556 ************************************ 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:24.556 * Looking for test storage... 
00:09:24.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:24.556 Cannot find device "nvmf_tgt_br" 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:24.556 Cannot find device "nvmf_tgt_br2" 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:24.556 Cannot find device "nvmf_tgt_br" 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:24.556 Cannot find device "nvmf_tgt_br2" 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:24.556 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:09:24.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:24.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:24.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:24.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:09:24.814 00:09:24.814 --- 10.0.0.2 ping statistics --- 00:09:24.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.814 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:24.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:24.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:09:24.814 00:09:24.814 --- 10.0.0.3 ping statistics --- 00:09:24.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.814 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:24.814 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:24.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:24.814 00:09:24.814 --- 10.0.0.1 ping statistics --- 00:09:24.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.815 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=70949 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 70949 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 70949 ']' 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
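For readability, the nvmf_veth_init commands traced above (common.sh@141-@207) amount to the following host/namespace topology: three veth pairs bridged on nvmf_br, with the target ends moved into the nvmf_tgt_ns_spdk namespace, then verified with the three pings whose output appears above. Names and addresses are exactly as logged; the corresponding "ip link set ... up" steps are omitted here for brevity:

# Condensed restatement of the setup commands already echoed in the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target IP
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP traffic in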
00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.815 14:49:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.073 [2024-07-12 14:49:03.484787] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:09:25.073 [2024-07-12 14:49:03.484874] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.073 [2024-07-12 14:49:03.621885] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:25.073 [2024-07-12 14:49:03.693964] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.073 [2024-07-12 14:49:03.694024] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.073 [2024-07-12 14:49:03.694037] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.073 [2024-07-12 14:49:03.694047] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.073 [2024-07-12 14:49:03.694056] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.073 [2024-07-12 14:49:03.694193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.073 [2024-07-12 14:49:03.694924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.073 [2024-07-12 14:49:03.694939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.007 [2024-07-12 14:49:04.507744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.007 [2024-07-12 14:49:04.525013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.007 NULL1 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=71001 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.007 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.008 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.575 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.575 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:26.575 14:49:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.575 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.575 14:49:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.832 14:49:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.832 14:49:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:26.832 14:49:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.832 14:49:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.832 14:49:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.089 14:49:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:09:27.089 14:49:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:27.089 14:49:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.089 14:49:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.089 14:49:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.346 14:49:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.346 14:49:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:27.346 14:49:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.346 14:49:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.346 14:49:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.604 14:49:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.604 14:49:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:27.604 14:49:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.604 14:49:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.604 14:49:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.168 14:49:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.168 14:49:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:28.168 14:49:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.168 14:49:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.168 14:49:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.425 14:49:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.425 14:49:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:28.425 14:49:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.425 14:49:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.425 14:49:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.682 14:49:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.682 14:49:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:28.682 14:49:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.682 14:49:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.682 14:49:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.939 14:49:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.940 14:49:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:28.940 14:49:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.940 14:49:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.940 14:49:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.201 14:49:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.201 14:49:07 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 71001 00:09:29.201 14:49:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.201 14:49:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.201 14:49:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.766 14:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.766 14:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:29.766 14:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.766 14:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.766 14:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.023 14:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.023 14:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:30.023 14:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.023 14:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.023 14:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.281 14:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.281 14:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:30.281 14:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.281 14:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.281 14:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.540 14:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.540 14:49:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:30.540 14:49:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.540 14:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.540 14:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.798 14:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.798 14:49:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:30.798 14:49:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.798 14:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.798 14:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.364 14:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.364 14:49:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:31.364 14:49:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.364 14:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.364 14:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.621 14:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.622 14:49:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:31.622 14:49:10 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.622 14:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.622 14:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.952 14:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.952 14:49:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:31.952 14:49:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.952 14:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.952 14:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.214 14:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.214 14:49:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:32.214 14:49:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.214 14:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.214 14:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.470 14:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.470 14:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:32.470 14:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.470 14:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.470 14:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.728 14:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.728 14:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:32.728 14:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.728 14:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.728 14:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.293 14:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.293 14:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:33.293 14:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.293 14:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.293 14:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.550 14:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.550 14:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:33.550 14:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.550 14:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.550 14:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.808 14:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.808 14:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:33.808 14:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
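The xtrace lines above all come from one pattern: the test forks a stress workload, then sits in a loop that polls the process with kill -0 (PID 71001 in this run) and issues an RPC to the target on every pass, so the controller keeps seeing admin traffic while the stress client works. A minimal sketch of that polling pattern, assuming the stress process was started in the background by the same shell and that rpc_cmd forwards to scripts/rpc.py; the RPC shown is only a stand-in, since the exact call is not visible in the log:

    # Monitor-loop sketch: STRESS_PID is assumed to hold the PID of the
    # background stress workload; rpc_cmd is assumed to wrap scripts/rpc.py.
    while kill -0 "$STRESS_PID" 2> /dev/null; do
        # keep the target busy while the stress run continues;
        # nvmf_get_subsystems is a placeholder for whatever RPC the test issues
        rpc_cmd nvmf_get_subsystems > /dev/null
        sleep 0.2
    done
    wait "$STRESS_PID"   # reap the stress process once kill -0 starts failing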
00:09:33.808 14:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.808 14:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.066 14:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.066 14:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:34.066 14:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.066 14:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.066 14:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.323 14:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.323 14:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:34.323 14:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.323 14:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.323 14:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.887 14:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.887 14:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:34.887 14:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.887 14:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.887 14:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.144 14:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.144 14:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:35.144 14:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.144 14:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.144 14:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.403 14:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.403 14:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:35.403 14:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.403 14:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.403 14:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.661 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.661 14:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:35.661 14:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.661 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.661 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.228 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.228 14:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:36.228 14:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.228 14:49:14 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.228 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.228 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71001 00:09:36.487 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (71001) - No such process 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 71001 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:36.487 rmmod nvme_tcp 00:09:36.487 rmmod nvme_fabrics 00:09:36.487 rmmod nvme_keyring 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 70949 ']' 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 70949 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 70949 ']' 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 70949 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70949 00:09:36.487 killing process with pid 70949 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70949' 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 70949 00:09:36.487 14:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 70949 00:09:36.746 14:49:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.746 14:49:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:36.746 14:49:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
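The "kill: (71001) - No such process" line is the loop's exit condition: the stress process has finished, so the script reaps it with wait, removes its scratch rpc.txt, and runs the usual teardown, which is what the trap, rmmod, and killprocess output above shows. A condensed sketch of that teardown order, assuming a hypothetical NVMF_PID variable holding the nvmf_tgt PID recorded at startup (70949 in this run):

    # Teardown sketch matching the sequence above.
    trap - SIGINT SIGTERM EXIT     # drop the error-handling trap first
    sync                           # flush outstanding I/O before unloading modules
    set +e                         # removal may legitimately fail if nothing is loaded
    modprobe -v -r nvme-tcp        # the rmmod lines above (nvme_tcp, nvme_fabrics,
    modprobe -v -r nvme-fabrics    #   nvme_keyring) come from these two removals
    set -e
    kill "$NVMF_PID"               # stop the target application...
    wait "$NVMF_PID"               # ...and reap it so the script can report status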
00:09:36.746 14:49:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.746 14:49:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:36.746 14:49:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.746 14:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.746 14:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.746 14:49:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:36.746 00:09:36.746 real 0m12.194s 00:09:36.746 user 0m40.787s 00:09:36.746 sys 0m3.278s 00:09:36.746 14:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.746 14:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.746 ************************************ 00:09:36.746 END TEST nvmf_connect_stress 00:09:36.746 ************************************ 00:09:36.746 14:49:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:36.746 14:49:15 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:36.746 14:49:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:36.746 14:49:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.746 14:49:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.746 ************************************ 00:09:36.746 START TEST nvmf_fused_ordering 00:09:36.746 ************************************ 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:36.746 * Looking for test storage... 
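Between the END TEST and START TEST banners the log switches from nvmf_connect_stress to nvmf_fused_ordering; both are launched through the same run_test helper, which prints the banners, times the script, and propagates its exit status. A rough sketch of that observable contract (not the actual autotest_common.sh implementation), with a usage line mirroring the call in the log:

    # run_test-style wrapper sketch: banner, timing, exit-status propagation.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"           # run the test script with its arguments
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # e.g. run_test_sketch nvmf_fused_ordering ./fused_ordering.sh --transport=tcp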
00:09:36.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:36.746 Cannot find device "nvmf_tgt_br" 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:36.746 Cannot find device "nvmf_tgt_br2" 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:36.746 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:36.746 Cannot find device "nvmf_tgt_br" 00:09:36.747 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:09:36.747 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:37.005 Cannot find device "nvmf_tgt_br2" 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:09:37.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:37.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:37.005 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:37.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:37.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:09:37.264 00:09:37.264 --- 10.0.0.2 ping statistics --- 00:09:37.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.264 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:37.264 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:37.264 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:09:37.264 00:09:37.264 --- 10.0.0.3 ping statistics --- 00:09:37.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.264 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:37.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:37.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:37.264 00:09:37.264 --- 10.0.0.1 ping statistics --- 00:09:37.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.264 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=71327 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 71327 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 71327 ']' 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
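The ip and ping output above documents the virtual test network nvmf_veth_init builds before any NVMe-oF traffic flows: an nvmf_tgt_ns_spdk namespace holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end (10.0.0.1) left in the default namespace, an nvmf_br bridge joining the peer interfaces, an iptables rule opening TCP/4420, and three pings to prove reachability in both directions. Condensed to its essentials, with the same interface names as the log (the second target interface, nvmf_tgt_if2, follows the same pattern as the first):

    # Condensed sketch of the namespace/veth/bridge topology set up above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                      # bridge the peer ends together
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target -> initiator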
00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.264 14:49:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:37.264 [2024-07-12 14:49:15.789272] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:09:37.264 [2024-07-12 14:49:15.789728] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.532 [2024-07-12 14:49:15.933785] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.532 [2024-07-12 14:49:16.003890] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.532 [2024-07-12 14:49:16.003965] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.532 [2024-07-12 14:49:16.003984] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.532 [2024-07-12 14:49:16.003994] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.532 [2024-07-12 14:49:16.004003] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:37.532 [2024-07-12 14:49:16.004033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.131 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:38.131 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:09:38.131 14:49:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:38.131 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:38.131 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:38.131 14:49:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.131 14:49:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:38.131 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.131 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:38.389 [2024-07-12 14:49:16.786070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
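With the namespace ready, nvmfappstart launches the target inside it and blocks until the RPC socket answers; that is the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above, followed by the SPDK/DPDK startup notices and the reactor starting on core 1. A minimal start-and-wait sketch; the polling loop is an illustration, the real waitforlisten helper retries with better diagnostics:

    # Start nvmf_tgt inside the test namespace and wait for its RPC socket.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # talks to /var/tmp/spdk.sock by default

    ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    until "$RPC" -t 1 rpc_get_methods &> /dev/null; do  # poll until the app answers RPCs
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done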
00:09:38.389 [2024-07-12 14:49:16.806138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:38.389 NULL1 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.389 14:49:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:38.389 [2024-07-12 14:49:16.855684] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
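Once the target answers RPCs, the rpc_cmd calls above configure it in a handful of steps, and the fused_ordering client is then pointed at the resulting subsystem. The same sequence expressed as direct rpc.py calls (rpc_cmd in the test script is essentially a wrapper around this):

    # Target configuration equivalent to the rpc_cmd calls above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" nvmf_create_transport -t tcp -o -u 8192        # transport options exactly as passed in the log
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$RPC" bdev_null_create NULL1 1000 512                # null bdev: 1000 MB, 512-byte blocks
    "$RPC" bdev_wait_for_examine
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # The client run that produces the fused_ordering(...) lines below:
    # fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'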
00:09:38.389 [2024-07-12 14:49:16.855738] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71377 ] 00:09:38.956 Attached to nqn.2016-06.io.spdk:cnode1 00:09:38.956 Namespace ID: 1 size: 1GB 00:09:38.956 fused_ordering(0) 00:09:38.956 fused_ordering(1) 00:09:38.956 fused_ordering(2) 00:09:38.956 fused_ordering(3) 00:09:38.956 fused_ordering(4) 00:09:38.956 fused_ordering(5) 00:09:38.956 fused_ordering(6) 00:09:38.956 fused_ordering(7) 00:09:38.956 fused_ordering(8) 00:09:38.956 fused_ordering(9) 00:09:38.956 fused_ordering(10) 00:09:38.956 fused_ordering(11) 00:09:38.956 fused_ordering(12) 00:09:38.957 fused_ordering(13) 00:09:38.957 fused_ordering(14) 00:09:38.957 fused_ordering(15) 00:09:38.957 fused_ordering(16) 00:09:38.957 fused_ordering(17) 00:09:38.957 fused_ordering(18) 00:09:38.957 fused_ordering(19) 00:09:38.957 fused_ordering(20) 00:09:38.957 fused_ordering(21) 00:09:38.957 fused_ordering(22) 00:09:38.957 fused_ordering(23) 00:09:38.957 fused_ordering(24) 00:09:38.957 fused_ordering(25) 00:09:38.957 fused_ordering(26) 00:09:38.957 fused_ordering(27) 00:09:38.957 fused_ordering(28) 00:09:38.957 fused_ordering(29) 00:09:38.957 fused_ordering(30) 00:09:38.957 fused_ordering(31) 00:09:38.957 fused_ordering(32) 00:09:38.957 fused_ordering(33) 00:09:38.957 fused_ordering(34) 00:09:38.957 fused_ordering(35) 00:09:38.957 fused_ordering(36) 00:09:38.957 fused_ordering(37) 00:09:38.957 fused_ordering(38) 00:09:38.957 fused_ordering(39) 00:09:38.957 fused_ordering(40) 00:09:38.957 fused_ordering(41) 00:09:38.957 fused_ordering(42) 00:09:38.957 fused_ordering(43) 00:09:38.957 fused_ordering(44) 00:09:38.957 fused_ordering(45) 00:09:38.957 fused_ordering(46) 00:09:38.957 fused_ordering(47) 00:09:38.957 fused_ordering(48) 00:09:38.957 fused_ordering(49) 00:09:38.957 fused_ordering(50) 00:09:38.957 fused_ordering(51) 00:09:38.957 fused_ordering(52) 00:09:38.957 fused_ordering(53) 00:09:38.957 fused_ordering(54) 00:09:38.957 fused_ordering(55) 00:09:38.957 fused_ordering(56) 00:09:38.957 fused_ordering(57) 00:09:38.957 fused_ordering(58) 00:09:38.957 fused_ordering(59) 00:09:38.957 fused_ordering(60) 00:09:38.957 fused_ordering(61) 00:09:38.957 fused_ordering(62) 00:09:38.957 fused_ordering(63) 00:09:38.957 fused_ordering(64) 00:09:38.957 fused_ordering(65) 00:09:38.957 fused_ordering(66) 00:09:38.957 fused_ordering(67) 00:09:38.957 fused_ordering(68) 00:09:38.957 fused_ordering(69) 00:09:38.957 fused_ordering(70) 00:09:38.957 fused_ordering(71) 00:09:38.957 fused_ordering(72) 00:09:38.957 fused_ordering(73) 00:09:38.957 fused_ordering(74) 00:09:38.957 fused_ordering(75) 00:09:38.957 fused_ordering(76) 00:09:38.957 fused_ordering(77) 00:09:38.957 fused_ordering(78) 00:09:38.957 fused_ordering(79) 00:09:38.957 fused_ordering(80) 00:09:38.957 fused_ordering(81) 00:09:38.957 fused_ordering(82) 00:09:38.957 fused_ordering(83) 00:09:38.957 fused_ordering(84) 00:09:38.957 fused_ordering(85) 00:09:38.957 fused_ordering(86) 00:09:38.957 fused_ordering(87) 00:09:38.957 fused_ordering(88) 00:09:38.957 fused_ordering(89) 00:09:38.957 fused_ordering(90) 00:09:38.957 fused_ordering(91) 00:09:38.957 fused_ordering(92) 00:09:38.957 fused_ordering(93) 00:09:38.957 fused_ordering(94) 00:09:38.957 fused_ordering(95) 00:09:38.957 fused_ordering(96) 00:09:38.957 fused_ordering(97) 00:09:38.957 
fused_ordering(98) 00:09:38.957 fused_ordering(99) 00:09:38.957 fused_ordering(100) 00:09:38.957 fused_ordering(101) 00:09:38.957 fused_ordering(102) 00:09:38.957 fused_ordering(103) 00:09:38.957 fused_ordering(104) 00:09:38.957 fused_ordering(105) 00:09:38.957 fused_ordering(106) 00:09:38.957 fused_ordering(107) 00:09:38.957 fused_ordering(108) 00:09:38.957 fused_ordering(109) 00:09:38.957 fused_ordering(110) 00:09:38.957 fused_ordering(111) 00:09:38.957 fused_ordering(112) 00:09:38.957 fused_ordering(113) 00:09:38.957 fused_ordering(114) 00:09:38.957 fused_ordering(115) 00:09:38.957 fused_ordering(116) 00:09:38.957 fused_ordering(117) 00:09:38.957 fused_ordering(118) 00:09:38.957 fused_ordering(119) 00:09:38.957 fused_ordering(120) 00:09:38.957 fused_ordering(121) 00:09:38.957 fused_ordering(122) 00:09:38.957 fused_ordering(123) 00:09:38.957 fused_ordering(124) 00:09:38.957 fused_ordering(125) 00:09:38.957 fused_ordering(126) 00:09:38.957 fused_ordering(127) 00:09:38.957 fused_ordering(128) 00:09:38.957 fused_ordering(129) 00:09:38.957 fused_ordering(130) 00:09:38.957 fused_ordering(131) 00:09:38.957 fused_ordering(132) 00:09:38.957 fused_ordering(133) 00:09:38.957 fused_ordering(134) 00:09:38.957 fused_ordering(135) 00:09:38.957 fused_ordering(136) 00:09:38.957 fused_ordering(137) 00:09:38.957 fused_ordering(138) 00:09:38.957 fused_ordering(139) 00:09:38.957 fused_ordering(140) 00:09:38.957 fused_ordering(141) 00:09:38.957 fused_ordering(142) 00:09:38.957 fused_ordering(143) 00:09:38.957 fused_ordering(144) 00:09:38.957 fused_ordering(145) 00:09:38.957 fused_ordering(146) 00:09:38.957 fused_ordering(147) 00:09:38.957 fused_ordering(148) 00:09:38.957 fused_ordering(149) 00:09:38.957 fused_ordering(150) 00:09:38.957 fused_ordering(151) 00:09:38.957 fused_ordering(152) 00:09:38.957 fused_ordering(153) 00:09:38.957 fused_ordering(154) 00:09:38.957 fused_ordering(155) 00:09:38.957 fused_ordering(156) 00:09:38.957 fused_ordering(157) 00:09:38.957 fused_ordering(158) 00:09:38.957 fused_ordering(159) 00:09:38.957 fused_ordering(160) 00:09:38.957 fused_ordering(161) 00:09:38.957 fused_ordering(162) 00:09:38.957 fused_ordering(163) 00:09:38.957 fused_ordering(164) 00:09:38.957 fused_ordering(165) 00:09:38.957 fused_ordering(166) 00:09:38.957 fused_ordering(167) 00:09:38.957 fused_ordering(168) 00:09:38.957 fused_ordering(169) 00:09:38.957 fused_ordering(170) 00:09:38.957 fused_ordering(171) 00:09:38.957 fused_ordering(172) 00:09:38.957 fused_ordering(173) 00:09:38.957 fused_ordering(174) 00:09:38.957 fused_ordering(175) 00:09:38.957 fused_ordering(176) 00:09:38.957 fused_ordering(177) 00:09:38.957 fused_ordering(178) 00:09:38.957 fused_ordering(179) 00:09:38.957 fused_ordering(180) 00:09:38.957 fused_ordering(181) 00:09:38.957 fused_ordering(182) 00:09:38.957 fused_ordering(183) 00:09:38.957 fused_ordering(184) 00:09:38.957 fused_ordering(185) 00:09:38.957 fused_ordering(186) 00:09:38.957 fused_ordering(187) 00:09:38.957 fused_ordering(188) 00:09:38.957 fused_ordering(189) 00:09:38.957 fused_ordering(190) 00:09:38.957 fused_ordering(191) 00:09:38.957 fused_ordering(192) 00:09:38.957 fused_ordering(193) 00:09:38.957 fused_ordering(194) 00:09:38.957 fused_ordering(195) 00:09:38.957 fused_ordering(196) 00:09:38.957 fused_ordering(197) 00:09:38.957 fused_ordering(198) 00:09:38.957 fused_ordering(199) 00:09:38.957 fused_ordering(200) 00:09:38.957 fused_ordering(201) 00:09:38.957 fused_ordering(202) 00:09:38.957 fused_ordering(203) 00:09:38.957 fused_ordering(204) 00:09:38.957 fused_ordering(205) 
00:09:39.217 fused_ordering(206) 00:09:39.217 fused_ordering(207) 00:09:39.217 fused_ordering(208) 00:09:39.217 fused_ordering(209) 00:09:39.217 fused_ordering(210) 00:09:39.217 fused_ordering(211) 00:09:39.217 fused_ordering(212) 00:09:39.217 fused_ordering(213) 00:09:39.217 fused_ordering(214) 00:09:39.217 fused_ordering(215) 00:09:39.217 fused_ordering(216) 00:09:39.217 fused_ordering(217) 00:09:39.217 fused_ordering(218) 00:09:39.217 fused_ordering(219) 00:09:39.217 fused_ordering(220) 00:09:39.217 fused_ordering(221) 00:09:39.217 fused_ordering(222) 00:09:39.217 fused_ordering(223) 00:09:39.217 fused_ordering(224) 00:09:39.217 fused_ordering(225) 00:09:39.217 fused_ordering(226) 00:09:39.217 fused_ordering(227) 00:09:39.217 fused_ordering(228) 00:09:39.217 fused_ordering(229) 00:09:39.217 fused_ordering(230) 00:09:39.217 fused_ordering(231) 00:09:39.217 fused_ordering(232) 00:09:39.217 fused_ordering(233) 00:09:39.217 fused_ordering(234) 00:09:39.217 fused_ordering(235) 00:09:39.217 fused_ordering(236) 00:09:39.217 fused_ordering(237) 00:09:39.217 fused_ordering(238) 00:09:39.217 fused_ordering(239) 00:09:39.217 fused_ordering(240) 00:09:39.217 fused_ordering(241) 00:09:39.217 fused_ordering(242) 00:09:39.217 fused_ordering(243) 00:09:39.217 fused_ordering(244) 00:09:39.217 fused_ordering(245) 00:09:39.217 fused_ordering(246) 00:09:39.217 fused_ordering(247) 00:09:39.217 fused_ordering(248) 00:09:39.217 fused_ordering(249) 00:09:39.217 fused_ordering(250) 00:09:39.217 fused_ordering(251) 00:09:39.217 fused_ordering(252) 00:09:39.217 fused_ordering(253) 00:09:39.217 fused_ordering(254) 00:09:39.217 fused_ordering(255) 00:09:39.217 fused_ordering(256) 00:09:39.217 fused_ordering(257) 00:09:39.217 fused_ordering(258) 00:09:39.217 fused_ordering(259) 00:09:39.217 fused_ordering(260) 00:09:39.217 fused_ordering(261) 00:09:39.217 fused_ordering(262) 00:09:39.217 fused_ordering(263) 00:09:39.217 fused_ordering(264) 00:09:39.217 fused_ordering(265) 00:09:39.217 fused_ordering(266) 00:09:39.217 fused_ordering(267) 00:09:39.217 fused_ordering(268) 00:09:39.217 fused_ordering(269) 00:09:39.217 fused_ordering(270) 00:09:39.217 fused_ordering(271) 00:09:39.217 fused_ordering(272) 00:09:39.217 fused_ordering(273) 00:09:39.217 fused_ordering(274) 00:09:39.217 fused_ordering(275) 00:09:39.217 fused_ordering(276) 00:09:39.217 fused_ordering(277) 00:09:39.217 fused_ordering(278) 00:09:39.217 fused_ordering(279) 00:09:39.217 fused_ordering(280) 00:09:39.217 fused_ordering(281) 00:09:39.217 fused_ordering(282) 00:09:39.217 fused_ordering(283) 00:09:39.217 fused_ordering(284) 00:09:39.217 fused_ordering(285) 00:09:39.217 fused_ordering(286) 00:09:39.217 fused_ordering(287) 00:09:39.217 fused_ordering(288) 00:09:39.217 fused_ordering(289) 00:09:39.217 fused_ordering(290) 00:09:39.217 fused_ordering(291) 00:09:39.217 fused_ordering(292) 00:09:39.217 fused_ordering(293) 00:09:39.217 fused_ordering(294) 00:09:39.217 fused_ordering(295) 00:09:39.217 fused_ordering(296) 00:09:39.217 fused_ordering(297) 00:09:39.217 fused_ordering(298) 00:09:39.217 fused_ordering(299) 00:09:39.217 fused_ordering(300) 00:09:39.217 fused_ordering(301) 00:09:39.217 fused_ordering(302) 00:09:39.217 fused_ordering(303) 00:09:39.217 fused_ordering(304) 00:09:39.217 fused_ordering(305) 00:09:39.217 fused_ordering(306) 00:09:39.217 fused_ordering(307) 00:09:39.217 fused_ordering(308) 00:09:39.217 fused_ordering(309) 00:09:39.217 fused_ordering(310) 00:09:39.217 fused_ordering(311) 00:09:39.217 fused_ordering(312) 00:09:39.217 
fused_ordering(313) ... fused_ordering(957) [identical per-iteration counter lines from the fused_ordering test, timestamps 00:09:39.217 through 00:09:40.613, condensed] 00:09:40.613
fused_ordering(958) 00:09:40.613 fused_ordering(959) 00:09:40.613 fused_ordering(960) 00:09:40.613 fused_ordering(961) 00:09:40.613 fused_ordering(962) 00:09:40.613 fused_ordering(963) 00:09:40.613 fused_ordering(964) 00:09:40.613 fused_ordering(965) 00:09:40.613 fused_ordering(966) 00:09:40.613 fused_ordering(967) 00:09:40.613 fused_ordering(968) 00:09:40.613 fused_ordering(969) 00:09:40.613 fused_ordering(970) 00:09:40.613 fused_ordering(971) 00:09:40.613 fused_ordering(972) 00:09:40.613 fused_ordering(973) 00:09:40.613 fused_ordering(974) 00:09:40.613 fused_ordering(975) 00:09:40.613 fused_ordering(976) 00:09:40.613 fused_ordering(977) 00:09:40.613 fused_ordering(978) 00:09:40.613 fused_ordering(979) 00:09:40.613 fused_ordering(980) 00:09:40.613 fused_ordering(981) 00:09:40.613 fused_ordering(982) 00:09:40.613 fused_ordering(983) 00:09:40.613 fused_ordering(984) 00:09:40.613 fused_ordering(985) 00:09:40.613 fused_ordering(986) 00:09:40.613 fused_ordering(987) 00:09:40.613 fused_ordering(988) 00:09:40.613 fused_ordering(989) 00:09:40.613 fused_ordering(990) 00:09:40.613 fused_ordering(991) 00:09:40.613 fused_ordering(992) 00:09:40.613 fused_ordering(993) 00:09:40.613 fused_ordering(994) 00:09:40.613 fused_ordering(995) 00:09:40.613 fused_ordering(996) 00:09:40.613 fused_ordering(997) 00:09:40.613 fused_ordering(998) 00:09:40.613 fused_ordering(999) 00:09:40.613 fused_ordering(1000) 00:09:40.613 fused_ordering(1001) 00:09:40.614 fused_ordering(1002) 00:09:40.614 fused_ordering(1003) 00:09:40.614 fused_ordering(1004) 00:09:40.614 fused_ordering(1005) 00:09:40.614 fused_ordering(1006) 00:09:40.614 fused_ordering(1007) 00:09:40.614 fused_ordering(1008) 00:09:40.614 fused_ordering(1009) 00:09:40.614 fused_ordering(1010) 00:09:40.614 fused_ordering(1011) 00:09:40.614 fused_ordering(1012) 00:09:40.614 fused_ordering(1013) 00:09:40.614 fused_ordering(1014) 00:09:40.614 fused_ordering(1015) 00:09:40.614 fused_ordering(1016) 00:09:40.614 fused_ordering(1017) 00:09:40.614 fused_ordering(1018) 00:09:40.614 fused_ordering(1019) 00:09:40.614 fused_ordering(1020) 00:09:40.614 fused_ordering(1021) 00:09:40.614 fused_ordering(1022) 00:09:40.614 fused_ordering(1023) 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:40.614 rmmod nvme_tcp 00:09:40.614 rmmod nvme_fabrics 00:09:40.614 rmmod nvme_keyring 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 71327 ']' 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 71327 00:09:40.614 14:49:19 
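The teardown traced here is the stock nvmftestfini pattern: clear the exit trap, flush, unload the initiator-side NVMe modules, then stop the nvmf_tgt process the test started. A minimal Bash sketch of that sequence (illustrative only, not the literal nvmf/common.sh code; $nvmfpid stands for the pid the harness saved at startup, 71327 in this run):

    trap - SIGINT SIGTERM EXIT      # stop the cleanup trap from firing again on exit
    sync                            # settle outstanding writes before unloading modules
    modprobe -v -r nvme-tcp         # also drags out nvme_fabrics / nvme_keyring, as the rmmod lines above show
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                 # terminate the target application
    wait "$nvmfpid"                 # reap it so the next test starts from a clean slate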
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 71327 ']' 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 71327 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71327 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71327' 00:09:40.614 killing process with pid 71327 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 71327 00:09:40.614 14:49:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 71327 00:09:40.892 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:40.892 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:40.892 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:40.892 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:40.892 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:40.892 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.892 14:49:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.892 14:49:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.892 14:49:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:40.892 00:09:40.892 real 0m4.173s 00:09:40.892 user 0m5.075s 00:09:40.892 sys 0m1.412s 00:09:40.892 14:49:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.892 14:49:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:40.892 ************************************ 00:09:40.892 END TEST nvmf_fused_ordering 00:09:40.892 ************************************ 00:09:40.892 14:49:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:40.892 14:49:19 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:40.892 14:49:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:40.892 14:49:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.892 14:49:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:40.892 ************************************ 00:09:40.892 START TEST nvmf_delete_subsystem 00:09:40.892 ************************************ 00:09:40.892 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:40.892 * Looking for test storage... 
00:09:40.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:40.892 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:40.892 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:40.892 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.892 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.892 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.892 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.892 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.892 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.892 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.892 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.892 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.892 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:41.150 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:41.151 Cannot find device "nvmf_tgt_br" 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:41.151 Cannot find device "nvmf_tgt_br2" 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:41.151 Cannot find device "nvmf_tgt_br" 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:41.151 Cannot find device "nvmf_tgt_br2" 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:41.151 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:41.151 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:41.151 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:41.410 14:49:19 
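Up to this point nvmf_veth_init has rebuilt the virtual test network from nothing. As a rough Bash sketch (simplified to a single target interface; the traced script also wires up nvmf_tgt_if2 with 10.0.0.3 the same way), the topology boils down to:

    ip netns add nvmf_tgt_ns_spdk                               # namespace that will run nvmf_tgt
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                             # bridge joining the host-side peer ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on port 4420
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks that follow simply confirm this plumbing before the target is started.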
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:41.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:41.410 00:09:41.410 --- 10.0.0.2 ping statistics --- 00:09:41.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.410 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:41.410 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:41.410 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:09:41.410 00:09:41.410 --- 10.0.0.3 ping statistics --- 00:09:41.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.410 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:41.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:41.410 00:09:41.410 --- 10.0.0.1 ping statistics --- 00:09:41.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.410 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:41.410 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.411 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71585 00:09:41.411 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71585 00:09:41.411 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:41.411 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 71585 ']' 00:09:41.411 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.411 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.411 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:41.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.411 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.411 14:49:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.411 [2024-07-12 14:49:20.020049] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:09:41.411 [2024-07-12 14:49:20.020147] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.669 [2024-07-12 14:49:20.161160] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:41.669 [2024-07-12 14:49:20.230726] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.669 [2024-07-12 14:49:20.230779] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.669 [2024-07-12 14:49:20.230793] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.669 [2024-07-12 14:49:20.230806] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.669 [2024-07-12 14:49:20.230821] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.669 [2024-07-12 14:49:20.230975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.669 [2024-07-12 14:49:20.230988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.603 [2024-07-12 14:49:21.091357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.603 [2024-07-12 14:49:21.107451] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.603 NULL1 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.603 Delay0 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71636 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:42.603 14:49:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:42.861 [2024-07-12 14:49:21.312142] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
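Taken together, the rpc_cmd calls above build the fixture this test needs: a TCP subsystem whose only namespace sits behind a delay bdev, plus a perf job to keep I/O outstanding. A condensed Bash sketch (written as direct rpc.py invocations for readability; the script itself goes through the rpc_cmd wrapper, and the binary paths are the ones from this workspace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # -a: allow any host
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                        # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
                                                                # ~1 s injected latency keeps I/O queued
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &               # cores 2-3, matching the lcore 2/3 lines below
    perf_pid=$!
    sleep 2                                                     # let a full queue build up behind Delay0
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1       # delete while that I/O is still in flight

Because every command against Delay0 is held for roughly a second, the delete lands while the perf job still has its whole queue outstanding, which is why the completions that follow come back as errors rather than successes.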
00:09:44.763 14:49:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:44.763 14:49:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:44.763 14:49:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:44.763 [several hundred "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions interleaved with "starting I/O failed: -6" are logged between 00:09:44.763 and 00:09:45.700 as the perf initiator's outstanding commands are failed back after the subsystem is deleted; condensed. The distinct transport errors reported in that window were:]
[2024-07-12 14:49:23.348465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c3880 is same with the state(5) to be set
[2024-07-12 14:49:23.349800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa0c000d370 is same with the state(5) to be set
[2024-07-12 14:49:24.329311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a1510 is same with the state(5) to be set
[2024-07-12 14:49:24.348204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa0c000d020 is same with the state(5) to be set
[2024-07-12 14:49:24.348436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa0c000d6c0 is same with the state(5) to be set
[2024-07-12 14:49:24.350067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c3530 is same with the state(5) to be set
00:09:45.701 Read completed with error (sct=0, sc=8) 00:09:45.701 Write completed with error (sct=0, sc=8) 00:09:45.701 Read completed with error (sct=0, sc=8) 00:09:45.701 Read completed with error (sct=0, sc=8) 00:09:45.701 Read completed with error (sct=0, sc=8) 00:09:45.701 Read completed with error (sct=0, sc=8) 00:09:45.701 [2024-07-12 14:49:24.350259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c3bd0 is same with the state(5) to be set 00:09:45.701 Initializing NVMe Controllers 00:09:45.701 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:45.701 Controller IO queue size 128, less than required. 00:09:45.701 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:45.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:45.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:45.701 Initialization complete. Launching workers. 00:09:45.701 ======================================================== 00:09:45.701 Latency(us) 00:09:45.701 Device Information : IOPS MiB/s Average min max 00:09:45.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.76 0.08 901620.59 397.09 1011143.46 00:09:45.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.79 0.08 969148.40 447.48 2001940.26 00:09:45.701 ======================================================== 00:09:45.701 Total : 329.56 0.16 934977.70 397.09 2001940.26 00:09:45.701 00:09:45.701 [2024-07-12 14:49:24.350948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a1510 (9): Bad file descriptor 00:09:45.701 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:45.701 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.701 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:45.701 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71636 00:09:45.701 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71636 00:09:46.268 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71636) - No such process 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71636 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 71636 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 71636 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@651 -- # es=1 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:46.268 [2024-07-12 14:49:24.874828] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=71682 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71682 00:09:46.268 14:49:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:46.526 [2024-07-12 14:49:25.064964] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
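For reference, the xtrace above boils down to the following standalone sequence. This is a minimal sketch, not the test script itself: it assumes an nvmf_tgt is already up with a Delay0 bdev registered, reuses the workspace paths printed in the log, and replaces the autotest rpc_cmd/NOT helpers with direct rpc.py calls.

#!/usr/bin/env bash
# Recreate the subsystem and listener, attach the delay bdev as a namespace,
# then drive I/O at it with spdk_nvme_perf and poll until perf exits.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# 3-second 70/30 randrw run at queue depth 128 against the TCP listener
# (flags copied from the perf invocation traced above)
$perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Same kill -0 / sleep 0.5 poll the script uses; the test deletes the subsystem
# while this loop runs, which is where the "completed with error" storm above
# comes from.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break
    sleep 0.5
done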
00:09:46.785 14:49:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:46.785 14:49:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71682 00:09:46.785 14:49:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:47.352 14:49:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:47.352 14:49:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71682 00:09:47.352 14:49:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:47.919 14:49:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:47.919 14:49:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71682 00:09:47.919 14:49:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:48.486 14:49:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:48.486 14:49:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71682 00:09:48.486 14:49:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:49.053 14:49:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:49.053 14:49:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71682 00:09:49.053 14:49:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:49.312 14:49:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:49.312 14:49:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71682 00:09:49.312 14:49:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:49.570 Initializing NVMe Controllers 00:09:49.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:49.570 Controller IO queue size 128, less than required. 00:09:49.570 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:49.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:49.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:49.570 Initialization complete. Launching workers. 
00:09:49.570 ======================================================== 00:09:49.570 Latency(us) 00:09:49.570 Device Information : IOPS MiB/s Average min max 00:09:49.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003624.83 1000146.57 1041777.91 00:09:49.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005810.87 1000779.97 1013531.95 00:09:49.570 ======================================================== 00:09:49.570 Total : 256.00 0.12 1004717.85 1000146.57 1041777.91 00:09:49.570 00:09:49.831 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:49.831 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71682 00:09:49.831 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71682) - No such process 00:09:49.831 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 71682 00:09:49.831 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:49.831 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:49.831 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:49.831 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:49.831 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:49.831 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:49.831 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.831 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:49.831 rmmod nvme_tcp 00:09:50.104 rmmod nvme_fabrics 00:09:50.104 rmmod nvme_keyring 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71585 ']' 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71585 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 71585 ']' 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 71585 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71585 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:50.104 killing process with pid 71585 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71585' 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 71585 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 71585 00:09:50.104 14:49:28 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:50.104 00:09:50.104 real 0m9.280s 00:09:50.104 user 0m28.755s 00:09:50.104 sys 0m1.508s 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.104 14:49:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:50.104 ************************************ 00:09:50.104 END TEST nvmf_delete_subsystem 00:09:50.104 ************************************ 00:09:50.373 14:49:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:50.373 14:49:28 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:50.373 14:49:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:50.373 14:49:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.373 14:49:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:50.373 ************************************ 00:09:50.373 START TEST nvmf_ns_masking 00:09:50.373 ************************************ 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:50.373 * Looking for test storage... 
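The nvmftestfini sequence traced just before this test started amounts to roughly the following. Sketch only: the pid, interface, and namespace names are the ones from this run, and the netns removal is an assumption about what _remove_spdk_ns reduces to for this veth topology.

# Unload the host-side NVMe/TCP stack (tcp before fabrics; nvme_keyring is
# pulled out alongside them, as the rmmod lines above show)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the nvmf_tgt reactor that served the test (pid 71585 in this log)
kill 71585

# Tear down the virtual network: assumed equivalent of _remove_spdk_ns, plus
# the address flush that is traced explicitly above
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
ip -4 addr flush nvmf_init_if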
00:09:50.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.373 14:49:28 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d85435c9-7556-429a-b81c-50be9411a625 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=ffb33740-806e-45fb-93cb-bc44b09e1209 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:50.374 
14:49:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0038e047-38c4-4c47-ab78-076693f77092 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:50.374 Cannot find device "nvmf_tgt_br" 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:09:50.374 14:49:28 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:50.374 Cannot find device "nvmf_tgt_br2" 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:50.374 Cannot find device "nvmf_tgt_br" 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:50.374 Cannot find device "nvmf_tgt_br2" 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:09:50.374 14:49:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:50.633 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:50.633 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:50.633 14:49:29 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:50.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:09:50.633 00:09:50.633 --- 10.0.0.2 ping statistics --- 00:09:50.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.633 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:50.633 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:50.633 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:09:50.633 00:09:50.633 --- 10.0.0.3 ping statistics --- 00:09:50.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.633 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:50.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:50.633 00:09:50.633 --- 10.0.0.1 ping statistics --- 00:09:50.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.633 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:50.633 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:50.892 14:49:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:50.893 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:50.893 14:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:50.893 14:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:50.893 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=71916 00:09:50.893 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 71916 00:09:50.893 14:49:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:50.893 14:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 71916 ']' 00:09:50.893 14:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.893 14:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.893 14:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.893 14:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.893 14:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:50.893 [2024-07-12 14:49:29.359393] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:09:50.893 [2024-07-12 14:49:29.359546] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.893 [2024-07-12 14:49:29.500882] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.151 [2024-07-12 14:49:29.570606] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.151 [2024-07-12 14:49:29.570672] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:51.151 [2024-07-12 14:49:29.570686] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.151 [2024-07-12 14:49:29.570697] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.151 [2024-07-12 14:49:29.570705] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.151 [2024-07-12 14:49:29.570734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.714 14:49:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:51.714 14:49:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:51.714 14:49:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.714 14:49:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:51.714 14:49:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:51.971 14:49:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.971 14:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:52.229 [2024-07-12 14:49:30.653463] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.229 14:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:52.229 14:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:52.229 14:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:52.487 Malloc1 00:09:52.487 14:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:52.744 Malloc2 00:09:52.744 14:49:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:53.001 14:49:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:53.259 14:49:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.518 [2024-07-12 14:49:32.036981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.518 14:49:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:53.518 14:49:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0038e047-38c4-4c47-ab78-076693f77092 -a 10.0.0.2 -s 4420 -i 4 00:09:53.518 14:49:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:53.518 14:49:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:53.518 14:49:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:53.518 14:49:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:53.518 14:49:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 
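The connect/waitforserial pair that runs next reduces to the two steps below. A sketch assuming the 10.0.0.2:4420 listener created above and the host UUID generated earlier in this run; the real waitforserial helper lives in autotest_common.sh.

# Attach the kernel initiator as host1, pinning the host ID (-I) that the
# masking checks rely on later, with 4 I/O queues (-i 4)
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 0038e047-38c4-4c47-ab78-076693f77092 -a 10.0.0.2 -s 4420 -i 4

# Namespaces appear asynchronously, so poll lsblk for the subsystem serial
# until the expected device count shows up (the helper's grep -c on
# SPDKISFASTANDAWESOME)
expected=1
for _ in {1..15}; do
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= expected )) && break
    sleep 2
done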
00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:56.044 [ 0]:0x1 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c3f7a9c22a7047638607cc699b7e0ec4 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c3f7a9c22a7047638607cc699b7e0ec4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:56.044 [ 0]:0x1 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c3f7a9c22a7047638607cc699b7e0ec4 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c3f7a9c22a7047638607cc699b7e0ec4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:56.044 [ 1]:0x2 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=286580372af044a99ab6f0ee939b39fb 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 286580372af044a99ab6f0ee939b39fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:56.044 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:56.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.302 14:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.560 14:49:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:56.818 14:49:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:56.818 14:49:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0038e047-38c4-4c47-ab78-076693f77092 -a 10.0.0.2 -s 4420 -i 4 00:09:57.076 14:49:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:57.076 14:49:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:57.076 14:49:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:57.076 14:49:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:57.076 14:49:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:57.076 14:49:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:58.976 [ 0]:0x2 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:58.976 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:59.234 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=286580372af044a99ab6f0ee939b39fb 00:09:59.234 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 286580372af044a99ab6f0ee939b39fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:59.234 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:59.492 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:59.492 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:59.492 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:59.492 [ 0]:0x1 00:09:59.492 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:59.492 14:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:59.492 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c3f7a9c22a7047638607cc699b7e0ec4 00:09:59.492 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c3f7a9c22a7047638607cc699b7e0ec4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:59.492 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:59.492 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:59.492 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:59.492 [ 1]:0x2 00:09:59.492 14:49:38 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:59.492 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:59.492 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=286580372af044a99ab6f0ee939b39fb 00:09:59.492 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 286580372af044a99ab6f0ee939b39fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:59.492 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:59.750 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:59.750 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:59.750 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:59.750 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:59.750 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:59.750 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:59.750 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:59.750 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:59.750 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:59.750 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:59.750 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:59.750 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:00.007 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:00.007 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:00.007 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:00.007 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:00.007 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:00.007 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:00.008 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:10:00.008 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:00.008 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:00.008 [ 0]:0x2 00:10:00.008 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:00.008 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:00.008 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=286580372af044a99ab6f0ee939b39fb 00:10:00.008 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 286580372af044a99ab6f0ee939b39fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:00.008 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
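The ns_is_visible helper being exercised above is just two nvme-cli probes plus a string compare; a sketch assuming controller nvme0 and NSID 1 (the namespace added with --no-auto-visible earlier), with the target-side RPCs that flip its visibility for host1, all taken from this trace.

# Is the NSID listed for this host at all? (prints "[ 0]:0x1" when visible)
nvme list-ns /dev/nvme0 | grep 0x1

# The test's criterion: an all-zero NGUID from id-ns means the namespace is
# masked for this host, a real NGUID means it is visible
nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
if [[ $nguid != "00000000000000000000000000000000" ]]; then
    echo "nsid 1 visible (nguid $nguid)"
else
    echo "nsid 1 masked"
fi

# Visibility is toggled per host on the target side
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1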
00:10:00.008 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:00.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.008 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:00.266 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:10:00.266 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0038e047-38c4-4c47-ab78-076693f77092 -a 10.0.0.2 -s 4420 -i 4 00:10:00.266 14:49:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:00.266 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:00.266 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.266 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:00.266 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:00.266 14:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:02.791 14:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:02.791 14:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:02.791 14:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:02.791 14:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:02.791 14:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:02.791 14:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:02.791 14:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:02.791 14:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:02.791 14:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:02.791 14:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:02.791 14:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:10:02.791 14:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:02.791 14:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:02.791 [ 0]:0x1 00:10:02.791 14:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:02.791 14:49:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:02.791 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c3f7a9c22a7047638607cc699b7e0ec4 00:10:02.791 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c3f7a9c22a7047638607cc699b7e0ec4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:02.791 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:10:02.791 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:02.791 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 
-- # grep 0x2 00:10:02.791 [ 1]:0x2 00:10:02.791 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:02.791 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:02.791 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=286580372af044a99ab6f0ee939b39fb 00:10:02.791 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 286580372af044a99ab6f0ee939b39fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:02.792 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:02.792 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:10:02.792 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:02.792 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:02.792 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:02.792 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.792 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:02.792 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.792 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:02.792 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:02.792 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:02.792 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:02.792 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:03.050 [ 0]:0x2 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=286580372af044a99ab6f0ee939b39fb 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 286580372af044a99ab6f0ee939b39fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:03.050 14:49:41 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:03.050 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:03.308 [2024-07-12 14:49:41.779578] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:03.308 2024/07/12 14:49:41 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:10:03.308 request: 00:10:03.308 { 00:10:03.308 "method": "nvmf_ns_remove_host", 00:10:03.308 "params": { 00:10:03.308 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.308 "nsid": 2, 00:10:03.308 "host": "nqn.2016-06.io.spdk:host1" 00:10:03.308 } 00:10:03.308 } 00:10:03.308 Got JSON-RPC error response 00:10:03.308 GoRPCClient: error on JSON-RPC call 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:03.308 14:49:41 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:03.308 [ 0]:0x2 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=286580372af044a99ab6f0ee939b39fb 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 286580372af044a99ab6f0ee939b39fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=72303 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 72303 /var/tmp/host.sock 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72303 ']' 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:03.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
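The test now brings up a second SPDK application (pid 72303 above) to act as the host side, giving it its own RPC socket at /var/tmp/host.sock, and waitforlisten blocks until that socket answers. A condensed sketch of that start-and-wait step, assuming that polling rpc_get_methods is an acceptable liveness probe (the real waitforlisten helper in autotest_common.sh does more bookkeeping), would be:

    # Condensed sketch; the polling probe is an assumption, not the exact helper.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    for _ in $(seq 1 100); do   # max_retries=100, as in the trace
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock -t 1 rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done

Every later hostrpc call in the trace (bdev_nvme_attach_controller, bdev_get_bdevs) is just rpc.py pointed at this -s /var/tmp/host.sock socket instead of the target's default socket.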
00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:03.308 14:49:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:03.566 [2024-07-12 14:49:42.019340] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:10:03.566 [2024-07-12 14:49:42.019974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72303 ] 00:10:03.566 [2024-07-12 14:49:42.158301] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.824 [2024-07-12 14:49:42.226368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.389 14:49:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:04.389 14:49:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:04.389 14:49:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.646 14:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.903 14:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d85435c9-7556-429a-b81c-50be9411a625 00:10:04.903 14:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:04.903 14:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D85435C97556429AB81C50BE9411A625 -i 00:10:05.161 14:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid ffb33740-806e-45fb-93cb-bc44b09e1209 00:10:05.161 14:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:05.161 14:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g FFB33740806E45FB93CBBC44B09E1209 -i 00:10:05.420 14:49:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:05.678 14:49:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:10:05.937 14:49:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:05.937 14:49:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:06.503 nvme0n1 00:10:06.503 14:49:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:06.503 14:49:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp 
-a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:06.761 nvme1n2 00:10:06.761 14:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:10:06.761 14:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:10:06.761 14:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:10:06.761 14:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:10:06.761 14:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:10:07.328 14:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:10:07.328 14:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:10:07.328 14:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:10:07.328 14:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:10:07.586 14:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d85435c9-7556-429a-b81c-50be9411a625 == \d\8\5\4\3\5\c\9\-\7\5\5\6\-\4\2\9\a\-\b\8\1\c\-\5\0\b\e\9\4\1\1\a\6\2\5 ]] 00:10:07.586 14:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:10:07.586 14:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:10:07.586 14:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:10:07.845 14:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ ffb33740-806e-45fb-93cb-bc44b09e1209 == \f\f\b\3\3\7\4\0\-\8\0\6\e\-\4\5\f\b\-\9\3\c\b\-\b\c\4\4\b\0\9\e\1\2\0\9 ]] 00:10:07.845 14:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 72303 00:10:07.845 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72303 ']' 00:10:07.845 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72303 00:10:07.845 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:07.845 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:07.845 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72303 00:10:07.845 killing process with pid 72303 00:10:07.845 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:07.845 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:07.845 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72303' 00:10:07.845 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72303 00:10:07.845 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72303 00:10:08.104 14:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:10:08.362 14:49:46 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:08.362 rmmod nvme_tcp 00:10:08.362 rmmod nvme_fabrics 00:10:08.362 rmmod nvme_keyring 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 71916 ']' 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 71916 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 71916 ']' 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 71916 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71916 00:10:08.362 killing process with pid 71916 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:08.362 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:08.363 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71916' 00:10:08.363 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 71916 00:10:08.363 14:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 71916 00:10:08.622 14:49:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:08.622 14:49:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:08.622 14:49:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:08.622 14:49:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:08.622 14:49:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:08.622 14:49:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.622 14:49:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:08.622 14:49:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.622 14:49:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:08.622 00:10:08.622 real 0m18.410s 00:10:08.622 user 0m29.847s 00:10:08.622 sys 0m2.593s 00:10:08.622 ************************************ 00:10:08.622 END TEST nvmf_ns_masking 00:10:08.622 ************************************ 00:10:08.622 14:49:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.622 14:49:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:08.622 14:49:47 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:10:08.622 14:49:47 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:10:08.622 14:49:47 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:10:08.622 14:49:47 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:08.622 14:49:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:08.622 14:49:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.622 14:49:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:08.622 ************************************ 00:10:08.622 START TEST nvmf_host_management 00:10:08.622 ************************************ 00:10:08.622 14:49:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:08.882 * Looking for test storage... 00:10:08.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:08.882 Cannot find device "nvmf_tgt_br" 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:08.882 Cannot find device "nvmf_tgt_br2" 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:10:08.882 14:49:47 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:08.882 Cannot find device "nvmf_tgt_br" 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:08.882 Cannot find device "nvmf_tgt_br2" 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:08.882 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:08.883 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:08.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.883 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:08.883 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:08.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.883 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:08.883 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:08.883 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:08.883 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:08.883 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:08.883 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:08.883 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:09.141 
14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:09.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:09.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:10:09.141 00:10:09.141 --- 10.0.0.2 ping statistics --- 00:10:09.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.141 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:09.141 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:09.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:10:09.141 00:10:09.141 --- 10.0.0.3 ping statistics --- 00:10:09.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.141 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:09.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:09.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:10:09.141 00:10:09.141 --- 10.0.0.1 ping statistics --- 00:10:09.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.141 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:09.141 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=72658 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 72658 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72658 ']' 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.142 14:49:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.142 [2024-07-12 14:49:47.772783] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:10:09.142 [2024-07-12 14:49:47.772888] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.399 [2024-07-12 14:49:47.911624] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:09.399 [2024-07-12 14:49:47.981707] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.399 [2024-07-12 14:49:47.981756] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.399 [2024-07-12 14:49:47.981769] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.399 [2024-07-12 14:49:47.981779] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.399 [2024-07-12 14:49:47.981787] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
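Before the target application was launched, the nvmf_veth_init sequence traced above built a small virtual topology: the target listens on 10.0.0.2 inside the nvmf_tgt_ns_spdk network namespace, the initiator stays in the root namespace on 10.0.0.1, and a bridge joins the veth peers. Stripped down to the essential commands seen in the log (interface names and addresses exactly as logged, the second target interface and error handling omitted), the setup is roughly:

    # Condensed from the nvmf/common.sh trace above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2          # root namespace -> target address

The ping checks at the end of that trace confirm reachability in both directions before the target is started inside the namespace with ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E.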
00:10:09.399 [2024-07-12 14:49:47.983550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.399 [2024-07-12 14:49:47.983645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.399 [2024-07-12 14:49:47.983859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:09.399 [2024-07-12 14:49:47.983867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.333 [2024-07-12 14:49:48.781871] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.333 Malloc0 00:10:10.333 [2024-07-12 14:49:48.841367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=72730 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 72730 /var/tmp/bdevperf.sock 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72730 ']' 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
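With the four reactors up, the trace shows the target being provisioned: nvmf_create_transport -t tcp -o -u 8192, a Malloc0 bdev, and a TCP listener on 10.0.0.2 port 4420, all driven from a generated rpcs.txt whose full contents are not echoed here. A typical sequence for this kind of setup, where the subsystem name, host NQN and malloc geometry are taken from values visible elsewhere in the log but the exact rpcs.txt lines are an assumption, would look like:

    # Illustrative only; rpcs.txt itself is not printed in the log.
    # rpc.py is what the harness wraps as rpc_cmd in the trace.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The nvmf_subsystem_remove_host call that appears later in the trace only makes sense against a subsystem configured this way, with nqn.2016-06.io.spdk:host0 explicitly allowed.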
00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:10.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:10.333 { 00:10:10.333 "params": { 00:10:10.333 "name": "Nvme$subsystem", 00:10:10.333 "trtype": "$TEST_TRANSPORT", 00:10:10.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:10.333 "adrfam": "ipv4", 00:10:10.333 "trsvcid": "$NVMF_PORT", 00:10:10.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:10.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:10.333 "hdgst": ${hdgst:-false}, 00:10:10.333 "ddgst": ${ddgst:-false} 00:10:10.333 }, 00:10:10.333 "method": "bdev_nvme_attach_controller" 00:10:10.333 } 00:10:10.333 EOF 00:10:10.333 )") 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:10.333 14:49:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:10.333 "params": { 00:10:10.333 "name": "Nvme0", 00:10:10.333 "trtype": "tcp", 00:10:10.333 "traddr": "10.0.0.2", 00:10:10.333 "adrfam": "ipv4", 00:10:10.333 "trsvcid": "4420", 00:10:10.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:10.333 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:10.333 "hdgst": false, 00:10:10.333 "ddgst": false 00:10:10.333 }, 00:10:10.333 "method": "bdev_nvme_attach_controller" 00:10:10.333 }' 00:10:10.333 [2024-07-12 14:49:48.945814] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:10:10.333 [2024-07-12 14:49:48.945906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72730 ] 00:10:10.592 [2024-07-12 14:49:49.083546] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.592 [2024-07-12 14:49:49.143760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.850 Running I/O for 10 seconds... 
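gen_nvmf_target_json, traced just above, prints the bdev_nvme_attach_controller fragment shown in the log and hands it to bdevperf through --json /dev/fd/63, so the initiator-side controller is created from configuration rather than by a separate RPC. Assembled into a complete config file (the outer subsystems/bdev wrapper is the standard SPDK JSON-config layout and is inferred here, not printed in the log; the /tmp path is only for illustration), the equivalent config and invocation would be roughly:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10

Once I/O is running, the trace that follows polls bdev_get_iostat over /var/tmp/bdevperf.sock and treats the step as passed as soon as num_read_ops crosses 100.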
00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.419 14:49:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:11.419 [2024-07-12 14:49:49.984984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.419 [2024-07-12 14:49:49.985035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.419 [2024-07-12 14:49:49.985059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.419 [2024-07-12 14:49:49.985071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.419 [2024-07-12 14:49:49.985084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.419 [2024-07-12 14:49:49.985093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.419 [2024-07-12 14:49:49.985105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.419 [2024-07-12 14:49:49.985115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.419 [2024-07-12 14:49:49.985127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.419 [2024-07-12 14:49:49.985136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.419 [2024-07-12 14:49:49.985148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.419 [2024-07-12 14:49:49.985157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.419 [2024-07-12 14:49:49.985169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.419 [2024-07-12 14:49:49.985178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.419 [2024-07-12 14:49:49.985189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.419 [2024-07-12 14:49:49.985199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.419 [2024-07-12 14:49:49.985210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.419 [2024-07-12 14:49:49.985220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.419 [2024-07-12 14:49:49.985232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.419 [2024-07-12 14:49:49.985241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.419 [2024-07-12 14:49:49.985252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.419 [2024-07-12 14:49:49.985262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.419 [2024-07-12 14:49:49.985274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:10:11.419 [2024-07-12 14:49:49.985283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.419 [2024-07-12 14:49:49.985296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.419 [2024-07-12 14:49:49.985311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.419 [2024-07-12 14:49:49.985323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.419 [2024-07-12 14:49:49.985333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.419 [2024-07-12 14:49:49.985344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.419 [2024-07-12 14:49:49.985354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.419 [2024-07-12 14:49:49.985365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 
14:49:49.985510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985948] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.985981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.985990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.986002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.986013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.986025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.986042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.986053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.986063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.986074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.986084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.986096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.986106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.986118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.986127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.986139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.986149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.986160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.986170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.986181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.986191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.986202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.986212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.986223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.986233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.986245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.420 [2024-07-12 14:49:49.986254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.420 [2024-07-12 14:49:49.986266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.421 [2024-07-12 14:49:49.986275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.421 [2024-07-12 14:49:49.986286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.421 [2024-07-12 14:49:49.986296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.421 [2024-07-12 14:49:49.986307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.421 [2024-07-12 14:49:49.986316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.421 [2024-07-12 14:49:49.986328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.421 [2024-07-12 14:49:49.986337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.421 [2024-07-12 14:49:49.986349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.421 [2024-07-12 14:49:49.986367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.421 [2024-07-12 14:49:49.986391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.421 [2024-07-12 14:49:49.986402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:10:11.421 [2024-07-12 14:49:49.986414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.421 [2024-07-12 14:49:49.986423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.421 [2024-07-12 14:49:49.986434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.421 [2024-07-12 14:49:49.986444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.421 [2024-07-12 14:49:49.986455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21349c0 is same with the state(5) to be set 00:10:11.421 [2024-07-12 14:49:49.986506] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21349c0 was disconnected and freed. reset controller. 00:10:11.421 14:49:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.421 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:11.421 14:49:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.421 14:49:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:11.421 [2024-07-12 14:49:49.987703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:11.421 task offset: 8064 on job bdev=Nvme0n1 fails 00:10:11.421 00:10:11.421 Latency(us) 00:10:11.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.421 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:11.421 Job: Nvme0n1 ended in about 0.70 seconds with error 00:10:11.421 Verification LBA range: start 0x0 length 0x400 00:10:11.421 Nvme0n1 : 0.70 1457.77 91.11 91.11 0.00 40313.70 5928.03 36461.85 00:10:11.421 =================================================================================================================== 00:10:11.421 Total : 1457.77 91.11 91.11 0.00 40313.70 5928.03 36461.85 00:10:11.421 [2024-07-12 14:49:49.989975] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:11.421 [2024-07-12 14:49:49.990013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2134c90 (9): Bad file descriptor 00:10:11.421 [2024-07-12 14:49:49.993798] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
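The host_management step above (host_management.sh@85) re-adds the host NQN to the subsystem through the test harness's rpc_cmd wrapper while the controller reset is still in flight. For reference, the same call issued directly against the target's RPC socket would look roughly like this; a sketch only — the socket path given to -s is the SPDK default and is an assumption, it does not appear in this log:

  # allow host0 to (re)connect to cnode0 on the running nvmf target (hypothetical direct invocation)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0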
00:10:11.421 14:49:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.421 14:49:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:12.354 14:49:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 72730 00:10:12.354 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72730) - No such process 00:10:12.354 14:49:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:12.354 14:49:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:12.354 14:49:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:12.354 14:49:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:12.354 14:49:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:12.354 14:49:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:12.354 14:49:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:12.354 14:49:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:12.354 { 00:10:12.354 "params": { 00:10:12.354 "name": "Nvme$subsystem", 00:10:12.354 "trtype": "$TEST_TRANSPORT", 00:10:12.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:12.354 "adrfam": "ipv4", 00:10:12.354 "trsvcid": "$NVMF_PORT", 00:10:12.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:12.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:12.354 "hdgst": ${hdgst:-false}, 00:10:12.354 "ddgst": ${ddgst:-false} 00:10:12.354 }, 00:10:12.354 "method": "bdev_nvme_attach_controller" 00:10:12.354 } 00:10:12.354 EOF 00:10:12.354 )") 00:10:12.354 14:49:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:12.611 14:49:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:12.611 14:49:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:12.611 14:49:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:12.611 "params": { 00:10:12.611 "name": "Nvme0", 00:10:12.611 "trtype": "tcp", 00:10:12.611 "traddr": "10.0.0.2", 00:10:12.611 "adrfam": "ipv4", 00:10:12.611 "trsvcid": "4420", 00:10:12.611 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:12.611 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:12.611 "hdgst": false, 00:10:12.611 "ddgst": false 00:10:12.611 }, 00:10:12.611 "method": "bdev_nvme_attach_controller" 00:10:12.611 }' 00:10:12.611 [2024-07-12 14:49:51.057487] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:10:12.611 [2024-07-12 14:49:51.057589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72780 ] 00:10:12.611 [2024-07-12 14:49:51.197145] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.611 [2024-07-12 14:49:51.256239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.869 Running I/O for 1 seconds... 
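The JSON fragment printed above by gen_nvmf_target_json is the bdev_nvme_attach_controller call that bdevperf replays at start-up (via --json /dev/fd/62) to create the Nvme0n1 bdev from the remote namespace at 10.0.0.2:4420. A rough equivalent issued interactively against a running SPDK application is sketched below; the short option letters are the usual rpc.py spellings and are an assumption here, since only the JSON parameter names appear in the log:

  # attach the NVMe/TCP controller and expose its namespaces as Nvme0nN bdevs (sketch)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0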
00:10:13.802 00:10:13.802 Latency(us) 00:10:13.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.802 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:13.802 Verification LBA range: start 0x0 length 0x400 00:10:13.802 Nvme0n1 : 1.04 1478.30 92.39 0.00 0.00 42325.19 5391.83 42896.29 00:10:13.803 =================================================================================================================== 00:10:13.803 Total : 1478.30 92.39 0.00 0.00 42325.19 5391.83 42896.29 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:14.062 rmmod nvme_tcp 00:10:14.062 rmmod nvme_fabrics 00:10:14.062 rmmod nvme_keyring 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 72658 ']' 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 72658 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 72658 ']' 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 72658 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:14.062 14:49:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72658 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:14.406 killing process with pid 72658 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72658' 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 72658 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 72658 00:10:14.406 [2024-07-12 14:49:52.871828] app.c: 715:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:14.406 00:10:14.406 real 0m5.687s 00:10:14.406 user 0m22.348s 00:10:14.406 sys 0m1.197s 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:14.406 14:49:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.406 ************************************ 00:10:14.406 END TEST nvmf_host_management 00:10:14.406 ************************************ 00:10:14.406 14:49:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:14.406 14:49:52 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:14.406 14:49:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:14.406 14:49:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.406 14:49:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:14.710 ************************************ 00:10:14.710 START TEST nvmf_lvol 00:10:14.710 ************************************ 00:10:14.710 14:49:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:14.710 * Looking for test storage... 
00:10:14.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:14.710 14:49:53 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:14.710 Cannot find device "nvmf_tgt_br" 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:14.710 Cannot find device "nvmf_tgt_br2" 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:14.710 Cannot find device "nvmf_tgt_br" 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:14.710 Cannot find device "nvmf_tgt_br2" 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:14.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:14.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:14.710 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:14.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:10:14.968 00:10:14.968 --- 10.0.0.2 ping statistics --- 00:10:14.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.968 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:14.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:14.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:10:14.968 00:10:14.968 --- 10.0.0.3 ping statistics --- 00:10:14.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.968 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:14.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:14.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:10:14.968 00:10:14.968 --- 10.0.0.1 ping statistics --- 00:10:14.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.968 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=73000 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 73000 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 73000 ']' 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:14.968 14:49:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.969 14:49:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:14.969 14:49:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:14.969 [2024-07-12 14:49:53.519579] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:10:14.969 [2024-07-12 14:49:53.519670] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.226 [2024-07-12 14:49:53.659806] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.227 [2024-07-12 14:49:53.728810] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.227 [2024-07-12 14:49:53.728865] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:15.227 [2024-07-12 14:49:53.728879] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.227 [2024-07-12 14:49:53.728889] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.227 [2024-07-12 14:49:53.728897] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.227 [2024-07-12 14:49:53.729031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.227 [2024-07-12 14:49:53.729127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.227 [2024-07-12 14:49:53.729622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.159 14:49:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.159 14:49:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:10:16.159 14:49:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:16.159 14:49:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:16.159 14:49:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:16.159 14:49:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.159 14:49:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:16.159 [2024-07-12 14:49:54.763435] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.159 14:49:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:16.417 14:49:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:16.417 14:49:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:16.675 14:49:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:16.675 14:49:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:16.933 14:49:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:17.191 14:49:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=65f748bc-ecf2-4f65-8d82-7c788b2bd9b3 00:10:17.191 14:49:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 65f748bc-ecf2-4f65-8d82-7c788b2bd9b3 lvol 20 00:10:17.758 14:49:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d4fa96ee-6cd8-4b49-8419-8ee78785d73f 00:10:17.758 14:49:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:17.758 14:49:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d4fa96ee-6cd8-4b49-8419-8ee78785d73f 00:10:18.016 14:49:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:18.274 [2024-07-12 14:49:56.842244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.274 14:49:56 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:18.533 14:49:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=73147 00:10:18.533 14:49:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:18.533 14:49:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:19.910 14:49:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot d4fa96ee-6cd8-4b49-8419-8ee78785d73f MY_SNAPSHOT 00:10:19.910 14:49:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=54c5454a-efe1-4546-a461-c382f176aa6e 00:10:19.910 14:49:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize d4fa96ee-6cd8-4b49-8419-8ee78785d73f 30 00:10:20.474 14:49:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 54c5454a-efe1-4546-a461-c382f176aa6e MY_CLONE 00:10:20.732 14:49:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=df1f3535-bb0d-4e40-9fe1-5096dfeb2729 00:10:20.732 14:49:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate df1f3535-bb0d-4e40-9fe1-5096dfeb2729 00:10:21.298 14:49:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 73147 00:10:29.408 Initializing NVMe Controllers 00:10:29.408 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:29.408 Controller IO queue size 128, less than required. 00:10:29.408 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:29.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:29.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:29.408 Initialization complete. Launching workers. 
00:10:29.408 ======================================================== 00:10:29.408 Latency(us) 00:10:29.408 Device Information : IOPS MiB/s Average min max 00:10:29.408 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10256.79 40.07 12479.77 1755.98 60143.28 00:10:29.408 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10360.49 40.47 12359.05 3770.41 67812.03 00:10:29.408 ======================================================== 00:10:29.408 Total : 20617.28 80.54 12419.10 1755.98 67812.03 00:10:29.408 00:10:29.409 14:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:29.409 14:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d4fa96ee-6cd8-4b49-8419-8ee78785d73f 00:10:29.409 14:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 65f748bc-ecf2-4f65-8d82-7c788b2bd9b3 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:29.667 rmmod nvme_tcp 00:10:29.667 rmmod nvme_fabrics 00:10:29.667 rmmod nvme_keyring 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 73000 ']' 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 73000 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 73000 ']' 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 73000 00:10:29.667 14:50:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:10:29.924 14:50:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:29.924 14:50:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73000 00:10:29.924 14:50:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:29.924 14:50:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:29.924 14:50:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73000' 00:10:29.924 killing process with pid 73000 00:10:29.924 14:50:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 73000 00:10:29.924 14:50:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 73000 00:10:29.924 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:29.924 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:10:29.924 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:29.924 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:29.924 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:29.924 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.924 14:50:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:29.924 14:50:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:30.182 00:10:30.182 real 0m15.609s 00:10:30.182 user 1m5.475s 00:10:30.182 sys 0m3.847s 00:10:30.182 ************************************ 00:10:30.182 END TEST nvmf_lvol 00:10:30.182 ************************************ 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:30.182 14:50:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:30.182 14:50:08 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:30.182 14:50:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:30.182 14:50:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.182 14:50:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:30.182 ************************************ 00:10:30.182 START TEST nvmf_lvs_grow 00:10:30.182 ************************************ 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:30.182 * Looking for test storage... 
00:10:30.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:30.182 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:30.183 Cannot find device "nvmf_tgt_br" 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:30.183 Cannot find device "nvmf_tgt_br2" 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:30.183 Cannot find device "nvmf_tgt_br" 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:30.183 Cannot find device "nvmf_tgt_br2" 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:10:30.183 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:30.441 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:30.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:30.441 14:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:30.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:30.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:10:30.441 00:10:30.441 --- 10.0.0.2 ping statistics --- 00:10:30.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.441 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:30.441 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:30.441 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:10:30.441 00:10:30.441 --- 10.0.0.3 ping statistics --- 00:10:30.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.441 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:30.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:10:30.441 00:10:30.441 --- 10.0.0.1 ping statistics --- 00:10:30.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.441 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:30.441 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:30.699 14:50:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:30.699 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:30.699 14:50:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:30.699 14:50:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:30.699 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=73504 00:10:30.699 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 73504 00:10:30.699 14:50:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:30.699 14:50:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 73504 ']' 00:10:30.699 14:50:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.699 14:50:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:30.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.699 14:50:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
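
Note: before nvmf_tgt is started inside the nvmf_tgt_ns_spdk namespace, nvmf_veth_init builds the veth/bridge topology traced above: the initiator side stays in the root namespace on 10.0.0.1, the two target interfaces live inside the namespace on 10.0.0.2 and 10.0.0.3, everything is joined through the nvmf_br bridge, and TCP port 4420 is allowed in by iptables. The sketch below simply consolidates those commands in one place for reference; interface names and addresses are taken from the trace, and it assumes it is run as root on a host with iproute2 and iptables available.

#!/usr/bin/env bash
# Consolidated reference sketch of the nvmf_veth_init steps shown in this log.
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The three ping checks at the end mirror the ones in the trace and confirm that the initiator can reach both target addresses through the bridge before the NVMe/TCP listener on port 4420 is created.
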
00:10:30.699 14:50:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:30.699 14:50:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:30.699 [2024-07-12 14:50:09.175096] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:10:30.699 [2024-07-12 14:50:09.175216] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.699 [2024-07-12 14:50:09.309254] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.956 [2024-07-12 14:50:09.369722] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.956 [2024-07-12 14:50:09.369791] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.956 [2024-07-12 14:50:09.369805] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.956 [2024-07-12 14:50:09.369814] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.956 [2024-07-12 14:50:09.369823] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.956 [2024-07-12 14:50:09.369848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.918 14:50:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:31.918 14:50:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:10:31.918 14:50:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:31.918 14:50:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:31.918 14:50:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:31.918 14:50:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.918 14:50:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:32.176 [2024-07-12 14:50:10.581495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.176 14:50:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:32.176 14:50:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:32.176 14:50:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.176 14:50:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:32.176 ************************************ 00:10:32.176 START TEST lvs_grow_clean 00:10:32.176 ************************************ 00:10:32.176 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:10:32.176 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:32.176 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:32.176 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:32.176 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:32.176 14:50:10 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:32.176 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:32.176 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:32.176 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:32.176 14:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:32.433 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:32.433 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:32.691 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b9ce9abf-2b22-4fa5-84db-6e000b92f95d 00:10:32.691 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:32.691 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9ce9abf-2b22-4fa5-84db-6e000b92f95d 00:10:33.255 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:33.255 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:33.255 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b9ce9abf-2b22-4fa5-84db-6e000b92f95d lvol 150 00:10:33.513 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9181df02-abd5-4591-afc9-5e9405ce4ff1 00:10:33.513 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:33.513 14:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:33.770 [2024-07-12 14:50:12.182732] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:33.770 [2024-07-12 14:50:12.182816] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:33.770 true 00:10:33.770 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9ce9abf-2b22-4fa5-84db-6e000b92f95d 00:10:33.770 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:34.028 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:34.028 14:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:34.287 14:50:12 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9181df02-abd5-4591-afc9-5e9405ce4ff1 00:10:34.548 14:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:34.806 [2024-07-12 14:50:13.279388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.806 14:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:35.066 14:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73677 00:10:35.066 14:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:35.066 14:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:35.066 14:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73677 /var/tmp/bdevperf.sock 00:10:35.066 14:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 73677 ']' 00:10:35.066 14:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:35.066 14:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:35.066 14:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:35.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:35.066 14:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:35.066 14:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:35.066 [2024-07-12 14:50:13.587248] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:10:35.066 [2024-07-12 14:50:13.587347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73677 ] 00:10:35.324 [2024-07-12 14:50:13.725413] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.324 [2024-07-12 14:50:13.795002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.256 14:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:36.256 14:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:10:36.256 14:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:36.256 Nvme0n1 00:10:36.256 14:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:36.513 [ 00:10:36.513 { 00:10:36.513 "aliases": [ 00:10:36.513 "9181df02-abd5-4591-afc9-5e9405ce4ff1" 00:10:36.513 ], 00:10:36.513 "assigned_rate_limits": { 00:10:36.513 "r_mbytes_per_sec": 0, 00:10:36.513 "rw_ios_per_sec": 0, 00:10:36.513 "rw_mbytes_per_sec": 0, 00:10:36.513 "w_mbytes_per_sec": 0 00:10:36.513 }, 00:10:36.513 "block_size": 4096, 00:10:36.513 "claimed": false, 00:10:36.513 "driver_specific": { 00:10:36.513 "mp_policy": "active_passive", 00:10:36.513 "nvme": [ 00:10:36.513 { 00:10:36.513 "ctrlr_data": { 00:10:36.513 "ana_reporting": false, 00:10:36.513 "cntlid": 1, 00:10:36.513 "firmware_revision": "24.09", 00:10:36.513 "model_number": "SPDK bdev Controller", 00:10:36.513 "multi_ctrlr": true, 00:10:36.513 "oacs": { 00:10:36.513 "firmware": 0, 00:10:36.513 "format": 0, 00:10:36.513 "ns_manage": 0, 00:10:36.513 "security": 0 00:10:36.513 }, 00:10:36.513 "serial_number": "SPDK0", 00:10:36.513 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:36.513 "vendor_id": "0x8086" 00:10:36.513 }, 00:10:36.513 "ns_data": { 00:10:36.513 "can_share": true, 00:10:36.513 "id": 1 00:10:36.513 }, 00:10:36.513 "trid": { 00:10:36.513 "adrfam": "IPv4", 00:10:36.513 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:36.513 "traddr": "10.0.0.2", 00:10:36.514 "trsvcid": "4420", 00:10:36.514 "trtype": "TCP" 00:10:36.514 }, 00:10:36.514 "vs": { 00:10:36.514 "nvme_version": "1.3" 00:10:36.514 } 00:10:36.514 } 00:10:36.514 ] 00:10:36.514 }, 00:10:36.514 "memory_domains": [ 00:10:36.514 { 00:10:36.514 "dma_device_id": "system", 00:10:36.514 "dma_device_type": 1 00:10:36.514 } 00:10:36.514 ], 00:10:36.514 "name": "Nvme0n1", 00:10:36.514 "num_blocks": 38912, 00:10:36.514 "product_name": "NVMe disk", 00:10:36.514 "supported_io_types": { 00:10:36.514 "abort": true, 00:10:36.514 "compare": true, 00:10:36.514 "compare_and_write": true, 00:10:36.514 "copy": true, 00:10:36.514 "flush": true, 00:10:36.514 "get_zone_info": false, 00:10:36.514 "nvme_admin": true, 00:10:36.514 "nvme_io": true, 00:10:36.514 "nvme_io_md": false, 00:10:36.514 "nvme_iov_md": false, 00:10:36.514 "read": true, 00:10:36.514 "reset": true, 00:10:36.514 "seek_data": false, 00:10:36.514 "seek_hole": false, 00:10:36.514 "unmap": true, 00:10:36.514 "write": true, 00:10:36.514 "write_zeroes": true, 00:10:36.514 "zcopy": false, 00:10:36.514 
"zone_append": false, 00:10:36.514 "zone_management": false 00:10:36.514 }, 00:10:36.514 "uuid": "9181df02-abd5-4591-afc9-5e9405ce4ff1", 00:10:36.514 "zoned": false 00:10:36.514 } 00:10:36.514 ] 00:10:36.514 14:50:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:36.514 14:50:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73719 00:10:36.514 14:50:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:36.772 Running I/O for 10 seconds... 00:10:37.706 Latency(us) 00:10:37.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.706 Nvme0n1 : 1.00 7877.00 30.77 0.00 0.00 0.00 0.00 0.00 00:10:37.706 =================================================================================================================== 00:10:37.706 Total : 7877.00 30.77 0.00 0.00 0.00 0.00 0.00 00:10:37.706 00:10:38.640 14:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b9ce9abf-2b22-4fa5-84db-6e000b92f95d 00:10:38.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.640 Nvme0n1 : 2.00 7820.50 30.55 0.00 0.00 0.00 0.00 0.00 00:10:38.640 =================================================================================================================== 00:10:38.640 Total : 7820.50 30.55 0.00 0.00 0.00 0.00 0.00 00:10:38.640 00:10:38.899 true 00:10:38.899 14:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:38.899 14:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9ce9abf-2b22-4fa5-84db-6e000b92f95d 00:10:39.168 14:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:39.168 14:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:39.168 14:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 73719 00:10:39.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.749 Nvme0n1 : 3.00 7886.33 30.81 0.00 0.00 0.00 0.00 0.00 00:10:39.749 =================================================================================================================== 00:10:39.749 Total : 7886.33 30.81 0.00 0.00 0.00 0.00 0.00 00:10:39.749 00:10:40.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.703 Nvme0n1 : 4.00 7869.50 30.74 0.00 0.00 0.00 0.00 0.00 00:10:40.703 =================================================================================================================== 00:10:40.703 Total : 7869.50 30.74 0.00 0.00 0.00 0.00 0.00 00:10:40.703 00:10:41.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.638 Nvme0n1 : 5.00 7810.20 30.51 0.00 0.00 0.00 0.00 0.00 00:10:41.638 =================================================================================================================== 00:10:41.638 Total : 7810.20 30.51 0.00 0.00 0.00 0.00 0.00 00:10:41.638 00:10:42.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.572 
Nvme0n1 : 6.00 7818.17 30.54 0.00 0.00 0.00 0.00 0.00 00:10:42.572 =================================================================================================================== 00:10:42.572 Total : 7818.17 30.54 0.00 0.00 0.00 0.00 0.00 00:10:42.572 00:10:43.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.946 Nvme0n1 : 7.00 7833.29 30.60 0.00 0.00 0.00 0.00 0.00 00:10:43.946 =================================================================================================================== 00:10:43.946 Total : 7833.29 30.60 0.00 0.00 0.00 0.00 0.00 00:10:43.946 00:10:44.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.881 Nvme0n1 : 8.00 7835.00 30.61 0.00 0.00 0.00 0.00 0.00 00:10:44.881 =================================================================================================================== 00:10:44.881 Total : 7835.00 30.61 0.00 0.00 0.00 0.00 0.00 00:10:44.881 00:10:45.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.817 Nvme0n1 : 9.00 7845.22 30.65 0.00 0.00 0.00 0.00 0.00 00:10:45.817 =================================================================================================================== 00:10:45.817 Total : 7845.22 30.65 0.00 0.00 0.00 0.00 0.00 00:10:45.817 00:10:46.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.752 Nvme0n1 : 10.00 7847.60 30.65 0.00 0.00 0.00 0.00 0.00 00:10:46.752 =================================================================================================================== 00:10:46.752 Total : 7847.60 30.65 0.00 0.00 0.00 0.00 0.00 00:10:46.752 00:10:46.752 00:10:46.752 Latency(us) 00:10:46.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:46.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.752 Nvme0n1 : 10.01 7848.73 30.66 0.00 0.00 16303.14 7745.16 42181.35 00:10:46.752 =================================================================================================================== 00:10:46.752 Total : 7848.73 30.66 0.00 0.00 16303.14 7745.16 42181.35 00:10:46.752 0 00:10:46.752 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73677 00:10:46.752 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 73677 ']' 00:10:46.752 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 73677 00:10:46.752 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:10:46.752 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:46.752 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73677 00:10:46.752 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:46.752 killing process with pid 73677 00:10:46.752 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:46.752 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73677' 00:10:46.752 Received shutdown signal, test time was about 10.000000 seconds 00:10:46.752 00:10:46.752 Latency(us) 00:10:46.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:46.752 
=================================================================================================================== 00:10:46.752 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:46.752 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 73677 00:10:46.752 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 73677 00:10:47.011 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:47.269 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:47.528 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9ce9abf-2b22-4fa5-84db-6e000b92f95d 00:10:47.528 14:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:47.786 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:47.786 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:47.786 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:48.044 [2024-07-12 14:50:26.476207] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:48.044 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9ce9abf-2b22-4fa5-84db-6e000b92f95d 00:10:48.044 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:10:48.044 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9ce9abf-2b22-4fa5-84db-6e000b92f95d 00:10:48.044 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:48.044 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.044 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:48.044 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.044 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:48.044 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.044 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:48.044 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:48.044 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9ce9abf-2b22-4fa5-84db-6e000b92f95d 00:10:48.303 2024/07/12 14:50:26 error on 
JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:b9ce9abf-2b22-4fa5-84db-6e000b92f95d], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:48.303 request: 00:10:48.303 { 00:10:48.303 "method": "bdev_lvol_get_lvstores", 00:10:48.303 "params": { 00:10:48.303 "uuid": "b9ce9abf-2b22-4fa5-84db-6e000b92f95d" 00:10:48.303 } 00:10:48.303 } 00:10:48.303 Got JSON-RPC error response 00:10:48.303 GoRPCClient: error on JSON-RPC call 00:10:48.303 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:10:48.303 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:48.303 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:48.303 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:48.303 14:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:48.561 aio_bdev 00:10:48.561 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9181df02-abd5-4591-afc9-5e9405ce4ff1 00:10:48.561 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=9181df02-abd5-4591-afc9-5e9405ce4ff1 00:10:48.561 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:48.561 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:10:48.561 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:48.561 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:48.561 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:48.820 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9181df02-abd5-4591-afc9-5e9405ce4ff1 -t 2000 00:10:49.078 [ 00:10:49.078 { 00:10:49.079 "aliases": [ 00:10:49.079 "lvs/lvol" 00:10:49.079 ], 00:10:49.079 "assigned_rate_limits": { 00:10:49.079 "r_mbytes_per_sec": 0, 00:10:49.079 "rw_ios_per_sec": 0, 00:10:49.079 "rw_mbytes_per_sec": 0, 00:10:49.079 "w_mbytes_per_sec": 0 00:10:49.079 }, 00:10:49.079 "block_size": 4096, 00:10:49.079 "claimed": false, 00:10:49.079 "driver_specific": { 00:10:49.079 "lvol": { 00:10:49.079 "base_bdev": "aio_bdev", 00:10:49.079 "clone": false, 00:10:49.079 "esnap_clone": false, 00:10:49.079 "lvol_store_uuid": "b9ce9abf-2b22-4fa5-84db-6e000b92f95d", 00:10:49.079 "num_allocated_clusters": 38, 00:10:49.079 "snapshot": false, 00:10:49.079 "thin_provision": false 00:10:49.079 } 00:10:49.079 }, 00:10:49.079 "name": "9181df02-abd5-4591-afc9-5e9405ce4ff1", 00:10:49.079 "num_blocks": 38912, 00:10:49.079 "product_name": "Logical Volume", 00:10:49.079 "supported_io_types": { 00:10:49.079 "abort": false, 00:10:49.079 "compare": false, 00:10:49.079 "compare_and_write": false, 00:10:49.079 "copy": false, 00:10:49.079 "flush": false, 00:10:49.079 "get_zone_info": false, 00:10:49.079 "nvme_admin": false, 00:10:49.079 "nvme_io": false, 00:10:49.079 "nvme_io_md": false, 00:10:49.079 "nvme_iov_md": false, 00:10:49.079 "read": true, 
00:10:49.079 "reset": true, 00:10:49.079 "seek_data": true, 00:10:49.079 "seek_hole": true, 00:10:49.079 "unmap": true, 00:10:49.079 "write": true, 00:10:49.079 "write_zeroes": true, 00:10:49.079 "zcopy": false, 00:10:49.079 "zone_append": false, 00:10:49.079 "zone_management": false 00:10:49.079 }, 00:10:49.079 "uuid": "9181df02-abd5-4591-afc9-5e9405ce4ff1", 00:10:49.079 "zoned": false 00:10:49.079 } 00:10:49.079 ] 00:10:49.079 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:10:49.079 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9ce9abf-2b22-4fa5-84db-6e000b92f95d 00:10:49.079 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:49.646 14:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:49.646 14:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:49.646 14:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9ce9abf-2b22-4fa5-84db-6e000b92f95d 00:10:49.646 14:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:49.646 14:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9181df02-abd5-4591-afc9-5e9405ce4ff1 00:10:49.904 14:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b9ce9abf-2b22-4fa5-84db-6e000b92f95d 00:10:50.470 14:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:50.730 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:50.988 00:10:50.988 real 0m18.969s 00:10:50.988 user 0m18.192s 00:10:50.988 sys 0m2.233s 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.988 ************************************ 00:10:50.988 END TEST lvs_grow_clean 00:10:50.988 ************************************ 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:50.988 ************************************ 00:10:50.988 START TEST lvs_grow_dirty 00:10:50.988 ************************************ 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 
-- # local data_clusters free_clusters 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:50.988 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:51.246 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:51.504 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:51.504 14:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:51.762 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ba06675a-d5a6-4e0b-8516-2b7e2da1a231 00:10:51.762 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba06675a-d5a6-4e0b-8516-2b7e2da1a231 00:10:51.762 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:52.021 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:52.021 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:52.021 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ba06675a-d5a6-4e0b-8516-2b7e2da1a231 lvol 150 00:10:52.279 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=aa6a60a3-fa9b-4fb9-82bf-2d7e93c48c5c 00:10:52.279 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:52.279 14:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:52.537 [2024-07-12 14:50:31.089406] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:52.537 [2024-07-12 14:50:31.089560] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:52.537 true 00:10:52.537 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:52.537 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba06675a-d5a6-4e0b-8516-2b7e2da1a231 00:10:52.796 14:50:31 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:52.796 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:53.054 14:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aa6a60a3-fa9b-4fb9-82bf-2d7e93c48c5c 00:10:53.626 14:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:53.626 [2024-07-12 14:50:32.242086] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.626 14:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:54.194 14:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74125 00:10:54.194 14:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:54.194 14:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:54.194 14:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74125 /var/tmp/bdevperf.sock 00:10:54.194 14:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74125 ']' 00:10:54.194 14:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:54.194 14:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:54.194 14:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:54.194 14:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.194 14:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:54.194 [2024-07-12 14:50:32.630356] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:10:54.194 [2024-07-12 14:50:32.630447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74125 ] 00:10:54.194 [2024-07-12 14:50:32.760952] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.194 [2024-07-12 14:50:32.821781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.127 14:50:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.127 14:50:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:55.127 14:50:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:55.386 Nvme0n1 00:10:55.386 14:50:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:55.645 [ 00:10:55.645 { 00:10:55.645 "aliases": [ 00:10:55.645 "aa6a60a3-fa9b-4fb9-82bf-2d7e93c48c5c" 00:10:55.645 ], 00:10:55.645 "assigned_rate_limits": { 00:10:55.645 "r_mbytes_per_sec": 0, 00:10:55.645 "rw_ios_per_sec": 0, 00:10:55.645 "rw_mbytes_per_sec": 0, 00:10:55.645 "w_mbytes_per_sec": 0 00:10:55.645 }, 00:10:55.645 "block_size": 4096, 00:10:55.645 "claimed": false, 00:10:55.645 "driver_specific": { 00:10:55.645 "mp_policy": "active_passive", 00:10:55.645 "nvme": [ 00:10:55.645 { 00:10:55.645 "ctrlr_data": { 00:10:55.645 "ana_reporting": false, 00:10:55.645 "cntlid": 1, 00:10:55.645 "firmware_revision": "24.09", 00:10:55.645 "model_number": "SPDK bdev Controller", 00:10:55.645 "multi_ctrlr": true, 00:10:55.645 "oacs": { 00:10:55.645 "firmware": 0, 00:10:55.645 "format": 0, 00:10:55.645 "ns_manage": 0, 00:10:55.645 "security": 0 00:10:55.645 }, 00:10:55.645 "serial_number": "SPDK0", 00:10:55.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:55.645 "vendor_id": "0x8086" 00:10:55.645 }, 00:10:55.645 "ns_data": { 00:10:55.645 "can_share": true, 00:10:55.645 "id": 1 00:10:55.645 }, 00:10:55.645 "trid": { 00:10:55.645 "adrfam": "IPv4", 00:10:55.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:55.645 "traddr": "10.0.0.2", 00:10:55.645 "trsvcid": "4420", 00:10:55.645 "trtype": "TCP" 00:10:55.645 }, 00:10:55.645 "vs": { 00:10:55.645 "nvme_version": "1.3" 00:10:55.645 } 00:10:55.645 } 00:10:55.645 ] 00:10:55.645 }, 00:10:55.645 "memory_domains": [ 00:10:55.645 { 00:10:55.645 "dma_device_id": "system", 00:10:55.645 "dma_device_type": 1 00:10:55.645 } 00:10:55.645 ], 00:10:55.645 "name": "Nvme0n1", 00:10:55.645 "num_blocks": 38912, 00:10:55.645 "product_name": "NVMe disk", 00:10:55.645 "supported_io_types": { 00:10:55.645 "abort": true, 00:10:55.645 "compare": true, 00:10:55.645 "compare_and_write": true, 00:10:55.645 "copy": true, 00:10:55.645 "flush": true, 00:10:55.645 "get_zone_info": false, 00:10:55.645 "nvme_admin": true, 00:10:55.645 "nvme_io": true, 00:10:55.645 "nvme_io_md": false, 00:10:55.645 "nvme_iov_md": false, 00:10:55.645 "read": true, 00:10:55.645 "reset": true, 00:10:55.645 "seek_data": false, 00:10:55.645 "seek_hole": false, 00:10:55.645 "unmap": true, 00:10:55.645 "write": true, 00:10:55.645 "write_zeroes": true, 00:10:55.645 "zcopy": false, 00:10:55.645 
"zone_append": false, 00:10:55.645 "zone_management": false 00:10:55.645 }, 00:10:55.645 "uuid": "aa6a60a3-fa9b-4fb9-82bf-2d7e93c48c5c", 00:10:55.645 "zoned": false 00:10:55.645 } 00:10:55.645 ] 00:10:55.645 14:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74178 00:10:55.645 14:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:55.645 14:50:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:55.904 Running I/O for 10 seconds... 00:10:56.839 Latency(us) 00:10:56.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:56.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:56.839 Nvme0n1 : 1.00 8045.00 31.43 0.00 0.00 0.00 0.00 0.00 00:10:56.839 =================================================================================================================== 00:10:56.839 Total : 8045.00 31.43 0.00 0.00 0.00 0.00 0.00 00:10:56.839 00:10:57.775 14:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ba06675a-d5a6-4e0b-8516-2b7e2da1a231 00:10:57.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.775 Nvme0n1 : 2.00 7986.50 31.20 0.00 0.00 0.00 0.00 0.00 00:10:57.775 =================================================================================================================== 00:10:57.775 Total : 7986.50 31.20 0.00 0.00 0.00 0.00 0.00 00:10:57.775 00:10:58.034 true 00:10:58.034 14:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba06675a-d5a6-4e0b-8516-2b7e2da1a231 00:10:58.034 14:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:58.292 14:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:58.292 14:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:58.292 14:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 74178 00:10:58.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.858 Nvme0n1 : 3.00 8006.00 31.27 0.00 0.00 0.00 0.00 0.00 00:10:58.858 =================================================================================================================== 00:10:58.858 Total : 8006.00 31.27 0.00 0.00 0.00 0.00 0.00 00:10:58.858 00:10:59.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.792 Nvme0n1 : 4.00 7762.75 30.32 0.00 0.00 0.00 0.00 0.00 00:10:59.792 =================================================================================================================== 00:10:59.792 Total : 7762.75 30.32 0.00 0.00 0.00 0.00 0.00 00:10:59.792 00:11:00.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.741 Nvme0n1 : 5.00 7335.00 28.65 0.00 0.00 0.00 0.00 0.00 00:11:00.741 =================================================================================================================== 00:11:00.741 Total : 7335.00 28.65 0.00 0.00 0.00 0.00 0.00 00:11:00.741 00:11:01.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.705 
Nvme0n1 : 6.00 7417.00 28.97 0.00 0.00 0.00 0.00 0.00 00:11:01.705 =================================================================================================================== 00:11:01.705 Total : 7417.00 28.97 0.00 0.00 0.00 0.00 0.00 00:11:01.705 00:11:03.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:03.079 Nvme0n1 : 7.00 7453.14 29.11 0.00 0.00 0.00 0.00 0.00 00:11:03.079 =================================================================================================================== 00:11:03.079 Total : 7453.14 29.11 0.00 0.00 0.00 0.00 0.00 00:11:03.079 00:11:04.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:04.015 Nvme0n1 : 8.00 7496.75 29.28 0.00 0.00 0.00 0.00 0.00 00:11:04.015 =================================================================================================================== 00:11:04.015 Total : 7496.75 29.28 0.00 0.00 0.00 0.00 0.00 00:11:04.015 00:11:04.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:04.949 Nvme0n1 : 9.00 7523.00 29.39 0.00 0.00 0.00 0.00 0.00 00:11:04.949 =================================================================================================================== 00:11:04.949 Total : 7523.00 29.39 0.00 0.00 0.00 0.00 0.00 00:11:04.949 00:11:05.885 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:05.885 Nvme0n1 : 10.00 7564.60 29.55 0.00 0.00 0.00 0.00 0.00 00:11:05.885 =================================================================================================================== 00:11:05.885 Total : 7564.60 29.55 0.00 0.00 0.00 0.00 0.00 00:11:05.885 00:11:05.885 00:11:05.885 Latency(us) 00:11:05.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:05.885 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:05.885 Nvme0n1 : 10.01 7567.23 29.56 0.00 0.00 16910.11 6404.65 318385.80 00:11:05.885 =================================================================================================================== 00:11:05.885 Total : 7567.23 29.56 0.00 0.00 16910.11 6404.65 318385.80 00:11:05.885 0 00:11:05.885 14:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74125 00:11:05.885 14:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 74125 ']' 00:11:05.885 14:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 74125 00:11:05.885 14:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:11:05.885 14:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:05.885 14:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74125 00:11:05.885 14:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:05.885 14:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:05.885 killing process with pid 74125 00:11:05.885 14:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74125' 00:11:05.885 14:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 74125 00:11:05.885 Received shutdown signal, test time was about 10.000000 seconds 00:11:05.885 00:11:05.885 Latency(us) 00:11:05.885 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:05.885 =================================================================================================================== 00:11:05.885 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:05.885 14:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 74125 00:11:06.144 14:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:06.404 14:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:06.662 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba06675a-d5a6-4e0b-8516-2b7e2da1a231 00:11:06.662 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:06.963 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:06.963 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:06.963 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73504 00:11:06.963 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73504 00:11:06.963 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73504 Killed "${NVMF_APP[@]}" "$@" 00:11:06.963 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:06.963 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:06.963 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:06.964 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:06.964 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:06.964 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=74341 00:11:06.964 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:06.964 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 74341 00:11:06.964 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74341 ']' 00:11:06.964 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.964 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.964 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:06.964 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.964 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:06.964 [2024-07-12 14:50:45.540600] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:11:06.964 [2024-07-12 14:50:45.540698] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.225 [2024-07-12 14:50:45.677627] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.225 [2024-07-12 14:50:45.767539] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.225 [2024-07-12 14:50:45.767631] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.225 [2024-07-12 14:50:45.767656] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.225 [2024-07-12 14:50:45.767673] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.225 [2024-07-12 14:50:45.767687] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:07.225 [2024-07-12 14:50:45.767725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.225 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:07.225 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:07.225 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:07.225 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:07.225 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:07.487 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.487 14:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:07.750 [2024-07-12 14:50:46.203588] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:07.750 [2024-07-12 14:50:46.204018] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:07.750 [2024-07-12 14:50:46.204166] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:07.750 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:07.750 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev aa6a60a3-fa9b-4fb9-82bf-2d7e93c48c5c 00:11:07.750 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=aa6a60a3-fa9b-4fb9-82bf-2d7e93c48c5c 00:11:07.750 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:07.751 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:07.751 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:07.751 14:50:46 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:07.751 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:08.011 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa6a60a3-fa9b-4fb9-82bf-2d7e93c48c5c -t 2000 00:11:08.269 [ 00:11:08.269 { 00:11:08.269 "aliases": [ 00:11:08.269 "lvs/lvol" 00:11:08.269 ], 00:11:08.269 "assigned_rate_limits": { 00:11:08.269 "r_mbytes_per_sec": 0, 00:11:08.269 "rw_ios_per_sec": 0, 00:11:08.269 "rw_mbytes_per_sec": 0, 00:11:08.269 "w_mbytes_per_sec": 0 00:11:08.269 }, 00:11:08.269 "block_size": 4096, 00:11:08.269 "claimed": false, 00:11:08.269 "driver_specific": { 00:11:08.269 "lvol": { 00:11:08.269 "base_bdev": "aio_bdev", 00:11:08.269 "clone": false, 00:11:08.269 "esnap_clone": false, 00:11:08.269 "lvol_store_uuid": "ba06675a-d5a6-4e0b-8516-2b7e2da1a231", 00:11:08.269 "num_allocated_clusters": 38, 00:11:08.270 "snapshot": false, 00:11:08.270 "thin_provision": false 00:11:08.270 } 00:11:08.270 }, 00:11:08.270 "name": "aa6a60a3-fa9b-4fb9-82bf-2d7e93c48c5c", 00:11:08.270 "num_blocks": 38912, 00:11:08.270 "product_name": "Logical Volume", 00:11:08.270 "supported_io_types": { 00:11:08.270 "abort": false, 00:11:08.270 "compare": false, 00:11:08.270 "compare_and_write": false, 00:11:08.270 "copy": false, 00:11:08.270 "flush": false, 00:11:08.270 "get_zone_info": false, 00:11:08.270 "nvme_admin": false, 00:11:08.270 "nvme_io": false, 00:11:08.270 "nvme_io_md": false, 00:11:08.270 "nvme_iov_md": false, 00:11:08.270 "read": true, 00:11:08.270 "reset": true, 00:11:08.270 "seek_data": true, 00:11:08.270 "seek_hole": true, 00:11:08.270 "unmap": true, 00:11:08.270 "write": true, 00:11:08.270 "write_zeroes": true, 00:11:08.270 "zcopy": false, 00:11:08.270 "zone_append": false, 00:11:08.270 "zone_management": false 00:11:08.270 }, 00:11:08.270 "uuid": "aa6a60a3-fa9b-4fb9-82bf-2d7e93c48c5c", 00:11:08.270 "zoned": false 00:11:08.270 } 00:11:08.270 ] 00:11:08.270 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:08.270 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba06675a-d5a6-4e0b-8516-2b7e2da1a231 00:11:08.270 14:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:08.528 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:08.528 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba06675a-d5a6-4e0b-8516-2b7e2da1a231 00:11:08.528 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:08.787 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:08.787 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:09.046 [2024-07-12 14:50:47.685392] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:09.305 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba06675a-d5a6-4e0b-8516-2b7e2da1a231 00:11:09.305 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:11:09.305 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba06675a-d5a6-4e0b-8516-2b7e2da1a231 00:11:09.305 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.305 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.305 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.305 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.305 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.305 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.305 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.305 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:09.305 14:50:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba06675a-d5a6-4e0b-8516-2b7e2da1a231 00:11:09.564 2024/07/12 14:50:48 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:ba06675a-d5a6-4e0b-8516-2b7e2da1a231], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:11:09.564 request: 00:11:09.564 { 00:11:09.564 "method": "bdev_lvol_get_lvstores", 00:11:09.564 "params": { 00:11:09.564 "uuid": "ba06675a-d5a6-4e0b-8516-2b7e2da1a231" 00:11:09.564 } 00:11:09.564 } 00:11:09.564 Got JSON-RPC error response 00:11:09.564 GoRPCClient: error on JSON-RPC call 00:11:09.564 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:11:09.564 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:09.564 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:09.564 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:09.564 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:09.822 aio_bdev 00:11:09.822 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev aa6a60a3-fa9b-4fb9-82bf-2d7e93c48c5c 00:11:09.822 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=aa6a60a3-fa9b-4fb9-82bf-2d7e93c48c5c 00:11:09.822 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:09.822 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:09.822 14:50:48 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:09.822 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:09.822 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:10.081 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa6a60a3-fa9b-4fb9-82bf-2d7e93c48c5c -t 2000 00:11:10.339 [ 00:11:10.339 { 00:11:10.339 "aliases": [ 00:11:10.339 "lvs/lvol" 00:11:10.339 ], 00:11:10.339 "assigned_rate_limits": { 00:11:10.339 "r_mbytes_per_sec": 0, 00:11:10.339 "rw_ios_per_sec": 0, 00:11:10.339 "rw_mbytes_per_sec": 0, 00:11:10.339 "w_mbytes_per_sec": 0 00:11:10.339 }, 00:11:10.339 "block_size": 4096, 00:11:10.339 "claimed": false, 00:11:10.339 "driver_specific": { 00:11:10.339 "lvol": { 00:11:10.339 "base_bdev": "aio_bdev", 00:11:10.339 "clone": false, 00:11:10.339 "esnap_clone": false, 00:11:10.339 "lvol_store_uuid": "ba06675a-d5a6-4e0b-8516-2b7e2da1a231", 00:11:10.339 "num_allocated_clusters": 38, 00:11:10.339 "snapshot": false, 00:11:10.339 "thin_provision": false 00:11:10.339 } 00:11:10.339 }, 00:11:10.339 "name": "aa6a60a3-fa9b-4fb9-82bf-2d7e93c48c5c", 00:11:10.339 "num_blocks": 38912, 00:11:10.339 "product_name": "Logical Volume", 00:11:10.339 "supported_io_types": { 00:11:10.339 "abort": false, 00:11:10.339 "compare": false, 00:11:10.339 "compare_and_write": false, 00:11:10.339 "copy": false, 00:11:10.339 "flush": false, 00:11:10.339 "get_zone_info": false, 00:11:10.339 "nvme_admin": false, 00:11:10.339 "nvme_io": false, 00:11:10.339 "nvme_io_md": false, 00:11:10.339 "nvme_iov_md": false, 00:11:10.339 "read": true, 00:11:10.339 "reset": true, 00:11:10.339 "seek_data": true, 00:11:10.339 "seek_hole": true, 00:11:10.339 "unmap": true, 00:11:10.339 "write": true, 00:11:10.339 "write_zeroes": true, 00:11:10.339 "zcopy": false, 00:11:10.339 "zone_append": false, 00:11:10.339 "zone_management": false 00:11:10.339 }, 00:11:10.339 "uuid": "aa6a60a3-fa9b-4fb9-82bf-2d7e93c48c5c", 00:11:10.339 "zoned": false 00:11:10.339 } 00:11:10.339 ] 00:11:10.339 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:10.339 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:10.339 14:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba06675a-d5a6-4e0b-8516-2b7e2da1a231 00:11:10.598 14:50:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:10.598 14:50:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:10.598 14:50:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba06675a-d5a6-4e0b-8516-2b7e2da1a231 00:11:10.856 14:50:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:10.857 14:50:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete aa6a60a3-fa9b-4fb9-82bf-2d7e93c48c5c 00:11:11.423 14:50:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba06675a-d5a6-4e0b-8516-2b7e2da1a231 00:11:11.423 14:50:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:11.990 14:50:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:12.249 00:11:12.249 real 0m21.111s 00:11:12.249 user 0m45.440s 00:11:12.249 sys 0m7.852s 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:12.249 ************************************ 00:11:12.249 END TEST lvs_grow_dirty 00:11:12.249 ************************************ 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:12.249 nvmf_trace.0 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:12.249 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:12.249 rmmod nvme_tcp 00:11:12.508 rmmod nvme_fabrics 00:11:12.508 rmmod nvme_keyring 00:11:12.508 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:12.508 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:11:12.508 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:11:12.508 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 74341 ']' 00:11:12.508 14:50:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 74341 00:11:12.508 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 74341 ']' 00:11:12.508 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 74341 00:11:12.508 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:11:12.508 14:50:50 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:12.508 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74341 00:11:12.508 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:12.508 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:12.508 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74341' 00:11:12.508 killing process with pid 74341 00:11:12.508 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 74341 00:11:12.508 14:50:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 74341 00:11:12.767 14:50:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:12.767 14:50:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:12.767 14:50:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:12.767 14:50:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:12.767 14:50:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:12.767 14:50:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.767 14:50:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.767 14:50:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.767 14:50:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:12.767 00:11:12.767 real 0m42.552s 00:11:12.767 user 1m10.046s 00:11:12.767 sys 0m10.716s 00:11:12.767 14:50:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:12.767 14:50:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:12.767 ************************************ 00:11:12.767 END TEST nvmf_lvs_grow 00:11:12.767 ************************************ 00:11:12.767 14:50:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:12.767 14:50:51 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:12.767 14:50:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:12.767 14:50:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.767 14:50:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:12.767 ************************************ 00:11:12.767 START TEST nvmf_bdev_io_wait 00:11:12.767 ************************************ 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:12.767 * Looking for test storage... 
00:11:12.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:12.767 Cannot find device "nvmf_tgt_br" 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:12.767 Cannot find device "nvmf_tgt_br2" 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:11:12.767 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:12.768 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:12.768 Cannot find device "nvmf_tgt_br" 00:11:12.768 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:11:12.768 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:13.026 Cannot find device "nvmf_tgt_br2" 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:13.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:13.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:13.026 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:13.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:11:13.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:11:13.285 00:11:13.285 --- 10.0.0.2 ping statistics --- 00:11:13.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.285 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:13.285 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:13.285 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:11:13.285 00:11:13.285 --- 10.0.0.3 ping statistics --- 00:11:13.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.285 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:13.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:13.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:11:13.285 00:11:13.285 --- 10.0.0.1 ping statistics --- 00:11:13.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.285 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=74747 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 74747 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 74747 ']' 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:13.285 14:50:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:13.285 [2024-07-12 14:50:51.794207] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:11:13.285 [2024-07-12 14:50:51.794756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.285 [2024-07-12 14:50:51.932289] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.543 [2024-07-12 14:50:51.994377] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.543 [2024-07-12 14:50:51.994433] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.544 [2024-07-12 14:50:51.994460] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.544 [2024-07-12 14:50:51.994468] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.544 [2024-07-12 14:50:51.994475] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.544 [2024-07-12 14:50:51.994627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.544 [2024-07-12 14:50:51.994690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.544 [2024-07-12 14:50:51.995417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.544 [2024-07-12 14:50:51.995405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.544 14:50:52 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:13.544 [2024-07-12 14:50:52.161234] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:13.544 Malloc0 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.544 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:13.803 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.803 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.803 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.803 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:13.803 [2024-07-12 14:50:52.204945] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.803 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.803 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74787 00:11:13.803 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:13.803 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:13.803 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:13.803 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:13.803 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=74789 00:11:13.803 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:13.803 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:13.803 { 00:11:13.803 "params": { 00:11:13.803 "name": "Nvme$subsystem", 00:11:13.803 "trtype": "$TEST_TRANSPORT", 00:11:13.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:13.803 "adrfam": "ipv4", 00:11:13.803 "trsvcid": "$NVMF_PORT", 00:11:13.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:13.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:13.803 "hdgst": ${hdgst:-false}, 00:11:13.803 "ddgst": 
${ddgst:-false} 00:11:13.804 }, 00:11:13.804 "method": "bdev_nvme_attach_controller" 00:11:13.804 } 00:11:13.804 EOF 00:11:13.804 )") 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74791 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:13.804 { 00:11:13.804 "params": { 00:11:13.804 "name": "Nvme$subsystem", 00:11:13.804 "trtype": "$TEST_TRANSPORT", 00:11:13.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:13.804 "adrfam": "ipv4", 00:11:13.804 "trsvcid": "$NVMF_PORT", 00:11:13.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:13.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:13.804 "hdgst": ${hdgst:-false}, 00:11:13.804 "ddgst": ${ddgst:-false} 00:11:13.804 }, 00:11:13.804 "method": "bdev_nvme_attach_controller" 00:11:13.804 } 00:11:13.804 EOF 00:11:13.804 )") 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74794 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:13.804 { 00:11:13.804 "params": { 00:11:13.804 "name": "Nvme$subsystem", 00:11:13.804 "trtype": "$TEST_TRANSPORT", 00:11:13.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:13.804 "adrfam": "ipv4", 00:11:13.804 "trsvcid": "$NVMF_PORT", 00:11:13.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:13.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:13.804 "hdgst": ${hdgst:-false}, 00:11:13.804 "ddgst": ${ddgst:-false} 00:11:13.804 }, 00:11:13.804 "method": "bdev_nvme_attach_controller" 00:11:13.804 } 00:11:13.804 EOF 00:11:13.804 )") 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:13.804 { 00:11:13.804 "params": { 00:11:13.804 "name": "Nvme$subsystem", 00:11:13.804 "trtype": "$TEST_TRANSPORT", 00:11:13.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:13.804 "adrfam": "ipv4", 00:11:13.804 "trsvcid": "$NVMF_PORT", 00:11:13.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:13.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:13.804 "hdgst": ${hdgst:-false}, 00:11:13.804 "ddgst": ${ddgst:-false} 00:11:13.804 }, 00:11:13.804 "method": "bdev_nvme_attach_controller" 00:11:13.804 } 00:11:13.804 EOF 00:11:13.804 )") 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:13.804 "params": { 00:11:13.804 "name": "Nvme1", 00:11:13.804 "trtype": "tcp", 00:11:13.804 "traddr": "10.0.0.2", 00:11:13.804 "adrfam": "ipv4", 00:11:13.804 "trsvcid": "4420", 00:11:13.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:13.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:13.804 "hdgst": false, 00:11:13.804 "ddgst": false 00:11:13.804 }, 00:11:13.804 "method": "bdev_nvme_attach_controller" 00:11:13.804 }' 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:13.804 "params": { 00:11:13.804 "name": "Nvme1", 00:11:13.804 "trtype": "tcp", 00:11:13.804 "traddr": "10.0.0.2", 00:11:13.804 "adrfam": "ipv4", 00:11:13.804 "trsvcid": "4420", 00:11:13.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:13.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:13.804 "hdgst": false, 00:11:13.804 "ddgst": false 00:11:13.804 }, 00:11:13.804 "method": "bdev_nvme_attach_controller" 00:11:13.804 }' 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:13.804 "params": { 00:11:13.804 "name": "Nvme1", 00:11:13.804 "trtype": "tcp", 00:11:13.804 "traddr": "10.0.0.2", 00:11:13.804 "adrfam": "ipv4", 00:11:13.804 "trsvcid": "4420", 00:11:13.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:13.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:13.804 "hdgst": false, 00:11:13.804 "ddgst": false 00:11:13.804 }, 00:11:13.804 "method": "bdev_nvme_attach_controller" 00:11:13.804 }' 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:13.804 "params": { 00:11:13.804 "name": "Nvme1", 00:11:13.804 "trtype": "tcp", 00:11:13.804 "traddr": "10.0.0.2", 00:11:13.804 "adrfam": "ipv4", 00:11:13.804 "trsvcid": "4420", 00:11:13.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:13.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:13.804 "hdgst": false, 00:11:13.804 "ddgst": false 00:11:13.804 }, 00:11:13.804 "method": "bdev_nvme_attach_controller" 00:11:13.804 }' 00:11:13.804 14:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 74787 00:11:13.804 [2024-07-12 14:50:52.274611] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:11:13.804 [2024-07-12 14:50:52.274697] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:13.804 [2024-07-12 14:50:52.281026] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:11:13.804 [2024-07-12 14:50:52.281247] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:13.804 [2024-07-12 14:50:52.290999] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:11:13.804 [2024-07-12 14:50:52.291076] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:13.804 [2024-07-12 14:50:52.296210] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:11:13.804 [2024-07-12 14:50:52.296286] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:13.804 [2024-07-12 14:50:52.451811] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.063 [2024-07-12 14:50:52.496310] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.063 [2024-07-12 14:50:52.497858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:14.063 [2024-07-12 14:50:52.533752] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.063 [2024-07-12 14:50:52.550003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:14.063 [2024-07-12 14:50:52.585983] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.063 [2024-07-12 14:50:52.603938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:14.063 Running I/O for 1 seconds... 00:11:14.063 [2024-07-12 14:50:52.640067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:11:14.063 Running I/O for 1 seconds... 00:11:14.324 Running I/O for 1 seconds... 00:11:14.324 Running I/O for 1 seconds... 
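[Annotation] The four "Running I/O for 1 seconds..." lines correspond to four bdevperf instances launched in parallel by bdev_io_wait.sh, each pinned to its own core mask and workload (0x10 write, 0x20 read, 0x40 flush, 0x80 unmap per the result tables below) and fed the generated attach config through a process-substitution file descriptor (the /dev/fd/63 seen in the trace). A condensed sketch of that launch pattern follows; gen_nvmf_target_json is assumed to be sourced from nvmf/common.sh, and the write instance's exact -m/-i flags are not shown in this excerpt, so they are assumptions.

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # One instance per workload; --json <(...) is what shows up as --json /dev/fd/63 in the trace.
    $bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &   # flags assumed for the write job
    $bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
    $bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    $bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    wait   # the script waits on each PID (74787/74789/74791/74794 in this run) before tearing down

Each instance prints its own latency table, which is why four separate Nvme1n1 result blocks appear below.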
00:11:15.263 00:11:15.263 Latency(us) 00:11:15.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.263 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:15.263 Nvme1n1 : 1.02 5348.72 20.89 0.00 0.00 23656.88 8757.99 46470.98 00:11:15.263 =================================================================================================================== 00:11:15.263 Total : 5348.72 20.89 0.00 0.00 23656.88 8757.99 46470.98 00:11:15.263 00:11:15.263 Latency(us) 00:11:15.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.263 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:15.263 Nvme1n1 : 1.01 8181.16 31.96 0.00 0.00 15555.00 10724.07 28001.75 00:11:15.263 =================================================================================================================== 00:11:15.263 Total : 8181.16 31.96 0.00 0.00 15555.00 10724.07 28001.75 00:11:15.263 00:11:15.263 Latency(us) 00:11:15.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.263 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:15.263 Nvme1n1 : 1.00 171368.59 669.41 0.00 0.00 743.87 301.61 1720.32 00:11:15.263 =================================================================================================================== 00:11:15.263 Total : 171368.59 669.41 0.00 0.00 743.87 301.61 1720.32 00:11:15.263 00:11:15.263 Latency(us) 00:11:15.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.263 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:15.263 Nvme1n1 : 1.00 6056.34 23.66 0.00 0.00 21074.05 4885.41 54096.99 00:11:15.263 =================================================================================================================== 00:11:15.263 Total : 6056.34 23.66 0.00 0.00 21074.05 4885.41 54096.99 00:11:15.263 14:50:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 74789 00:11:15.263 14:50:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 74791 00:11:15.523 14:50:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 74794 00:11:15.523 14:50:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.523 14:50:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.523 14:50:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:15.523 14:50:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.523 14:50:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:15.523 14:50:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:15.523 14:50:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:15.523 14:50:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:15.523 rmmod nvme_tcp 00:11:15.523 rmmod nvme_fabrics 00:11:15.523 rmmod nvme_keyring 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 74747 ']' 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 74747 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 74747 ']' 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 74747 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74747 00:11:15.523 killing process with pid 74747 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74747' 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 74747 00:11:15.523 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 74747 00:11:15.782 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:15.782 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:15.782 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:15.782 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:15.782 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:15.782 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.782 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.782 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.782 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:15.782 00:11:15.782 real 0m3.014s 00:11:15.782 user 0m13.516s 00:11:15.782 sys 0m1.630s 00:11:15.782 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:15.782 14:50:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:15.782 ************************************ 00:11:15.782 END TEST nvmf_bdev_io_wait 00:11:15.782 ************************************ 00:11:15.782 14:50:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:15.782 14:50:54 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:15.782 14:50:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:15.782 14:50:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.782 14:50:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:15.782 ************************************ 00:11:15.782 START TEST nvmf_queue_depth 00:11:15.782 ************************************ 00:11:15.782 14:50:54 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:15.782 * Looking for test storage... 00:11:15.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.782 14:50:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:15.783 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:16.043 Cannot find device "nvmf_tgt_br" 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.043 Cannot find device "nvmf_tgt_br2" 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:16.043 Cannot find device "nvmf_tgt_br" 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:16.043 Cannot find device "nvmf_tgt_br2" 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:16.043 14:50:54 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:16.043 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:11:16.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:11:16.302 00:11:16.302 --- 10.0.0.2 ping statistics --- 00:11:16.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.302 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:16.302 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:16.302 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:11:16.302 00:11:16.302 --- 10.0.0.3 ping statistics --- 00:11:16.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.302 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:16.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:16.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:11:16.302 00:11:16.302 --- 10.0.0.1 ping statistics --- 00:11:16.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.302 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=74993 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 74993 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 74993 ']' 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:16.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
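[Annotation] The nvmf_veth_init sequence traced above (nvmf/common.sh@166-@207) builds the virtual test network the rest of the run depends on: the initiator stays in the default namespace on 10.0.0.1, the target interfaces move into the nvmf_tgt_ns_spdk namespace as 10.0.0.2 and 10.0.0.3, everything is bridged through nvmf_br, and reachability is verified with the pings shown. Condensed to its essentials (link-up steps omitted), the commands from the trace are:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # plus 10.0.0.3 and the reverse path from inside the namespace

With the namespace in place, the target application is then started inside it (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 74993 above), which is why the listener created later binds to 10.0.0.2.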
00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:16.302 14:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.302 [2024-07-12 14:50:54.790420] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:11:16.302 [2024-07-12 14:50:54.790512] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.302 [2024-07-12 14:50:54.924990] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.560 [2024-07-12 14:50:54.988065] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.560 [2024-07-12 14:50:54.988118] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.560 [2024-07-12 14:50:54.988130] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.560 [2024-07-12 14:50:54.988138] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.560 [2024-07-12 14:50:54.988145] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:16.560 [2024-07-12 14:50:54.988169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.127 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.127 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:17.127 14:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:17.127 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:17.127 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.386 [2024-07-12 14:50:55.823483] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.386 Malloc0 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.386 [2024-07-12 14:50:55.884697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=75043 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 75043 /var/tmp/bdevperf.sock 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75043 ']' 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:17.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:17.386 14:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.386 [2024-07-12 14:50:55.945692] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
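[Annotation] By this point queue_depth.sh has brought up a complete target and a long-lived bdevperf instance. A compressed sketch of the sequence traced above; rpc_cmd is assumed to resolve to the target's RPC socket as set up by the harness, and all paths and NQNs are taken verbatim from the trace:

    # Target side (inside nvmf_tgt_ns_spdk, pid 74993 in this run):
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: bdevperf is started idle (-z) on its own RPC socket at queue depth 1024;
    # the controller attach and the bdevperf.py perform_tests call follow in the trace below.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &

The -q 1024 queue depth against the single Malloc0-backed namespace is what produces the 10-second verify run and its latency table below.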
00:11:17.386 [2024-07-12 14:50:55.945789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75043 ] 00:11:17.644 [2024-07-12 14:50:56.084115] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.644 [2024-07-12 14:50:56.142297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.579 14:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:18.579 14:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:18.579 14:50:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:18.579 14:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.579 14:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:18.579 NVMe0n1 00:11:18.579 14:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.579 14:50:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:18.879 Running I/O for 10 seconds... 00:11:28.863 00:11:28.863 Latency(us) 00:11:28.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.863 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:28.863 Verification LBA range: start 0x0 length 0x4000 00:11:28.863 NVMe0n1 : 10.08 8611.27 33.64 0.00 0.00 118353.12 28120.90 81502.95 00:11:28.863 =================================================================================================================== 00:11:28.863 Total : 8611.27 33.64 0.00 0.00 118353.12 28120.90 81502.95 00:11:28.863 0 00:11:28.863 14:51:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 75043 00:11:28.863 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75043 ']' 00:11:28.863 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75043 00:11:28.863 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:28.863 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:28.863 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75043 00:11:28.863 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:28.863 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:28.863 killing process with pid 75043 00:11:28.863 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75043' 00:11:28.863 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75043 00:11:28.863 Received shutdown signal, test time was about 10.000000 seconds 00:11:28.863 00:11:28.863 Latency(us) 00:11:28.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.864 =================================================================================================================== 00:11:28.864 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:28.864 14:51:07 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75043 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.123 rmmod nvme_tcp 00:11:29.123 rmmod nvme_fabrics 00:11:29.123 rmmod nvme_keyring 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 74993 ']' 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 74993 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 74993 ']' 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 74993 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74993 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:29.123 killing process with pid 74993 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74993' 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 74993 00:11:29.123 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 74993 00:11:29.381 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:29.381 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:29.381 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:29.381 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.381 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:29.381 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.381 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.381 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.381 14:51:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:29.381 00:11:29.381 real 0m13.615s 00:11:29.381 user 0m23.981s 00:11:29.381 sys 0m1.896s 00:11:29.381 14:51:07 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.381 14:51:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:29.381 ************************************ 00:11:29.381 END TEST nvmf_queue_depth 00:11:29.381 ************************************ 00:11:29.381 14:51:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:29.381 14:51:07 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:29.381 14:51:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:29.381 14:51:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.381 14:51:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:29.381 ************************************ 00:11:29.381 START TEST nvmf_target_multipath 00:11:29.381 ************************************ 00:11:29.382 14:51:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:29.640 * Looking for test storage... 00:11:29.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:29.640 14:51:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:29.641 14:51:08 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:29.641 Cannot find device "nvmf_tgt_br" 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:29.641 Cannot find device "nvmf_tgt_br2" 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:29.641 Cannot find device "nvmf_tgt_br" 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:29.641 Cannot find device "nvmf_tgt_br2" 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:29.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:29.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:29.641 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:29.900 
14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:29.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:11:29.900 00:11:29.900 --- 10.0.0.2 ping statistics --- 00:11:29.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.900 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:29.900 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:29.900 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:11:29.900 00:11:29.900 --- 10.0.0.3 ping statistics --- 00:11:29.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.900 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:29.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:29.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:29.900 00:11:29.900 --- 10.0.0.1 ping statistics --- 00:11:29.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.900 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=75379 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 75379 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 75379 ']' 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:29.900 14:51:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:29.901 [2024-07-12 14:51:08.500597] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:11:29.901 [2024-07-12 14:51:08.501297] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.159 [2024-07-12 14:51:08.640806] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.159 [2024-07-12 14:51:08.725429] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.159 [2024-07-12 14:51:08.725561] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.159 [2024-07-12 14:51:08.725583] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.159 [2024-07-12 14:51:08.725599] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.159 [2024-07-12 14:51:08.725614] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.159 [2024-07-12 14:51:08.726655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.159 [2024-07-12 14:51:08.726764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.159 [2024-07-12 14:51:08.726918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.159 [2024-07-12 14:51:08.726931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.091 14:51:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.091 14:51:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:11:31.091 14:51:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:31.091 14:51:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:31.091 14:51:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:31.091 14:51:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.091 14:51:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:31.347 [2024-07-12 14:51:09.748208] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.347 14:51:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:31.604 Malloc0 00:11:31.604 14:51:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:31.861 14:51:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:32.119 14:51:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.376 [2024-07-12 14:51:10.886023] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.376 14:51:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.3 -s 4420 00:11:32.634 [2024-07-12 14:51:11.162270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:32.634 14:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:11:32.892 14:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:33.151 14:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:33.151 14:51:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:11:33.151 14:51:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.151 14:51:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:33.151 14:51:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- 
# p1=nvme0c1n1 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75524 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:35.051 14:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:35.051 [global] 00:11:35.051 thread=1 00:11:35.051 invalidate=1 00:11:35.051 rw=randrw 00:11:35.051 time_based=1 00:11:35.051 runtime=6 00:11:35.051 ioengine=libaio 00:11:35.051 direct=1 00:11:35.051 bs=4096 00:11:35.051 iodepth=128 00:11:35.051 norandommap=0 00:11:35.051 numjobs=1 00:11:35.051 00:11:35.051 verify_dump=1 00:11:35.051 verify_backlog=512 00:11:35.051 verify_state_save=0 00:11:35.051 do_verify=1 00:11:35.051 verify=crc32c-intel 00:11:35.051 [job0] 00:11:35.051 filename=/dev/nvme0n1 00:11:35.051 Could not set queue depth (nvme0n1) 00:11:35.308 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:35.308 fio-3.35 00:11:35.308 Starting 1 thread 00:11:36.240 14:51:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:36.498 14:51:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:36.757 14:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:36.757 14:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:36.757 14:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:36.757 14:51:15 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:36.757 14:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:36.757 14:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:36.757 14:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:36.757 14:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:36.757 14:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:36.757 14:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:36.757 14:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:36.757 14:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:36.757 14:51:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:37.781 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:37.781 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:37.781 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:37.781 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:38.039 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:38.298 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:38.298 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:38.298 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:38.298 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:38.298 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:38.298 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:38.298 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:38.298 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:38.298 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:38.298 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:38.298 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:38.298 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:38.298 14:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:39.232 14:51:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:39.232 14:51:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:39.232 14:51:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:39.232 14:51:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75524 00:11:41.760 00:11:41.760 job0: (groupid=0, jobs=1): err= 0: pid=75550: Fri Jul 12 14:51:19 2024 00:11:41.761 read: IOPS=10.3k, BW=40.4MiB/s (42.3MB/s)(242MiB/6006msec) 00:11:41.761 slat (usec): min=3, max=9620, avg=54.92, stdev=247.54 00:11:41.761 clat (usec): min=575, max=25197, avg=8379.47, stdev=1564.28 00:11:41.761 lat (usec): min=595, max=25209, avg=8434.39, stdev=1574.77 00:11:41.761 clat percentiles (usec): 00:11:41.761 | 1.00th=[ 5014], 5.00th=[ 6521], 10.00th=[ 7046], 20.00th=[ 7373], 00:11:41.761 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8455], 00:11:41.761 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[10290], 95.00th=[11207], 00:11:41.761 | 99.00th=[13435], 99.50th=[14484], 99.90th=[19268], 99.95th=[21627], 00:11:41.761 | 99.99th=[25035] 00:11:41.761 bw ( KiB/s): min= 6176, max=28376, per=52.19%, avg=21565.00, stdev=6095.40, samples=11 00:11:41.761 iops : min= 1544, max= 7094, avg=5391.18, stdev=1523.85, samples=11 00:11:41.761 write: IOPS=6181, BW=24.1MiB/s (25.3MB/s)(130MiB/5401msec); 0 zone resets 00:11:41.761 slat (usec): min=13, max=2782, avg=67.54, stdev=173.99 00:11:41.761 clat (usec): min=478, max=23592, avg=7250.12, stdev=1413.05 00:11:41.761 lat (usec): min=512, max=23621, avg=7317.66, stdev=1420.93 00:11:41.761 clat percentiles (usec): 00:11:41.761 | 1.00th=[ 3982], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 6456], 00:11:41.761 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7308], 00:11:41.761 | 70.00th=[ 7504], 80.00th=[ 7898], 90.00th=[ 8717], 95.00th=[ 9634], 00:11:41.761 | 99.00th=[11863], 99.50th=[13042], 99.90th=[20579], 99.95th=[22414], 00:11:41.761 | 99.99th=[23462] 00:11:41.761 bw ( KiB/s): min= 6696, max=27408, per=87.49%, avg=21632.64, stdev=5798.50, samples=11 00:11:41.761 iops : min= 1674, max= 6852, avg=5408.09, stdev=1449.62, samples=11 00:11:41.761 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:41.761 lat (msec) : 2=0.01%, 4=0.46%, 10=90.64%, 20=8.80%, 50=0.09% 00:11:41.761 cpu : usr=5.43%, sys=22.68%, ctx=6388, majf=0, minf=108 00:11:41.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:41.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.761 issued rwts: total=62041,33386,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.761 00:11:41.761 Run status group 0 (all jobs): 00:11:41.761 READ: bw=40.4MiB/s (42.3MB/s), 40.4MiB/s-40.4MiB/s (42.3MB/s-42.3MB/s), io=242MiB (254MB), run=6006-6006msec 00:11:41.761 WRITE: bw=24.1MiB/s (25.3MB/s), 24.1MiB/s-24.1MiB/s (25.3MB/s-25.3MB/s), io=130MiB (137MB), run=5401-5401msec 00:11:41.761 00:11:41.761 Disk stats (read/write): 00:11:41.761 nvme0n1: 
ios=61355/32475, merge=0/0, ticks=482151/220442, in_queue=702593, util=98.61% 00:11:41.761 14:51:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:11:41.761 14:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:42.019 14:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:42.019 14:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:42.019 14:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:42.019 14:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:42.019 14:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:42.019 14:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:42.019 14:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:42.019 14:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:42.019 14:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:42.019 14:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:42.019 14:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:42.019 14:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:11:42.019 14:51:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:42.954 14:51:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:42.954 14:51:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:42.954 14:51:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:42.954 14:51:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:42.954 14:51:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=75674 00:11:42.954 14:51:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:42.954 14:51:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:42.954 [global] 00:11:42.954 thread=1 00:11:42.954 invalidate=1 00:11:42.954 rw=randrw 00:11:42.954 time_based=1 00:11:42.954 runtime=6 00:11:42.954 ioengine=libaio 00:11:42.954 direct=1 00:11:42.954 bs=4096 00:11:42.954 iodepth=128 00:11:42.954 norandommap=0 00:11:42.954 numjobs=1 00:11:42.954 00:11:42.954 verify_dump=1 00:11:42.954 verify_backlog=512 00:11:42.954 verify_state_save=0 00:11:42.954 do_verify=1 00:11:42.954 verify=crc32c-intel 00:11:42.954 [job0] 00:11:42.954 filename=/dev/nvme0n1 00:11:42.954 Could not set queue depth (nvme0n1) 00:11:43.212 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:43.212 fio-3.35 00:11:43.212 Starting 1 thread 00:11:44.147 14:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:44.147 14:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:44.404 14:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:44.404 14:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:44.404 14:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:44.404 14:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:44.404 14:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:44.404 14:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:44.404 14:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:44.404 14:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:44.404 14:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:44.404 14:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:44.404 14:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:44.404 14:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:44.404 14:51:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:45.779 14:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:45.779 14:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:45.779 14:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:45.779 14:51:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:45.779 14:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:46.038 14:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:46.038 14:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:46.038 14:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:46.038 14:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:46.038 14:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:46.038 14:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:46.038 14:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:46.038 14:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:46.038 14:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:46.038 14:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:46.038 14:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:46.038 14:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:46.038 14:51:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:46.974 14:51:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:46.974 14:51:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:46.974 14:51:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:46.974 14:51:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 75674 00:11:49.505 00:11:49.505 job0: (groupid=0, jobs=1): err= 0: pid=75695: Fri Jul 12 14:51:27 2024 00:11:49.505 read: IOPS=11.8k, BW=46.1MiB/s (48.3MB/s)(277MiB/6004msec) 00:11:49.505 slat (usec): min=3, max=7007, avg=44.35, stdev=220.17 00:11:49.505 clat (usec): min=330, max=44493, avg=7501.60, stdev=1888.82 00:11:49.505 lat (usec): min=346, max=44501, avg=7545.95, stdev=1906.62 00:11:49.505 clat percentiles (usec): 00:11:49.505 | 1.00th=[ 2704], 5.00th=[ 4359], 10.00th=[ 5014], 20.00th=[ 5997], 00:11:49.505 | 30.00th=[ 6980], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7832], 00:11:49.505 | 70.00th=[ 8291], 80.00th=[ 8717], 90.00th=[ 9372], 95.00th=[10683], 00:11:49.505 | 99.00th=[12649], 99.50th=[13566], 99.90th=[16188], 99.95th=[16712], 00:11:49.505 | 99.99th=[17695] 00:11:49.505 bw ( KiB/s): min= 8768, max=42504, per=53.75%, avg=25367.27, stdev=9697.17, samples=11 00:11:49.505 iops : min= 2192, max=10626, avg=6341.82, stdev=2424.29, samples=11 00:11:49.505 write: IOPS=6932, BW=27.1MiB/s (28.4MB/s)(146MiB/5387msec); 0 zone resets 00:11:49.505 slat (usec): min=13, max=2966, avg=54.98, stdev=134.15 00:11:49.505 clat (usec): min=209, max=17038, avg=6154.84, stdev=1796.35 00:11:49.505 lat (usec): min=248, max=17066, avg=6209.82, stdev=1808.11 00:11:49.505 clat percentiles (usec): 00:11:49.505 | 1.00th=[ 1631], 5.00th=[ 3228], 10.00th=[ 3720], 20.00th=[ 4424], 00:11:49.505 | 30.00th=[ 5211], 40.00th=[ 6194], 50.00th=[ 6587], 60.00th=[ 6849], 00:11:49.505 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 7832], 95.00th=[ 8455], 00:11:49.505 | 99.00th=[10814], 99.50th=[11731], 99.90th=[15401], 99.95th=[15795], 00:11:49.505 | 99.99th=[15926] 00:11:49.505 bw ( KiB/s): min= 9208, max=41624, per=91.35%, avg=25333.82, stdev=9464.20, samples=11 00:11:49.505 iops : min= 2302, max=10406, avg=6333.45, stdev=2366.05, samples=11 00:11:49.505 lat (usec) : 250=0.01%, 500=0.02%, 750=0.03%, 1000=0.11% 00:11:49.505 lat (msec) : 2=0.57%, 4=6.21%, 10=87.91%, 20=5.15%, 50=0.01% 00:11:49.505 cpu : usr=5.96%, sys=26.34%, ctx=8297, majf=0, minf=108 00:11:49.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:49.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.505 issued rwts: total=70835,37348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.505 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.505 00:11:49.505 Run status group 0 (all jobs): 00:11:49.505 READ: bw=46.1MiB/s (48.3MB/s), 46.1MiB/s-46.1MiB/s (48.3MB/s-48.3MB/s), io=277MiB (290MB), run=6004-6004msec 00:11:49.505 WRITE: bw=27.1MiB/s (28.4MB/s), 27.1MiB/s-27.1MiB/s (28.4MB/s-28.4MB/s), io=146MiB (153MB), run=5387-5387msec 00:11:49.505 00:11:49.505 Disk stats (read/write): 00:11:49.505 nvme0n1: ios=69736/37101, merge=0/0, ticks=479294/204061, in_queue=683355, util=98.56% 00:11:49.505 14:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:49.505 14:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.505 14:51:27 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:11:49.505 14:51:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:49.505 14:51:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.505 14:51:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.505 14:51:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:49.505 14:51:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:11:49.505 14:51:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.505 14:51:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:49.764 rmmod nvme_tcp 00:11:49.764 rmmod nvme_fabrics 00:11:49.764 rmmod nvme_keyring 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 75379 ']' 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 75379 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 75379 ']' 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 75379 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75379 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:49.764 killing process with pid 75379 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75379' 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 75379 00:11:49.764 14:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 
-- # wait 75379 00:11:50.023 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:50.023 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:50.023 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:50.023 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:50.023 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:50.023 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.023 14:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.023 14:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.023 14:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:50.023 00:11:50.023 real 0m20.493s 00:11:50.023 user 1m20.783s 00:11:50.023 sys 0m6.452s 00:11:50.023 14:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.023 14:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:50.023 ************************************ 00:11:50.023 END TEST nvmf_target_multipath 00:11:50.023 ************************************ 00:11:50.023 14:51:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:50.023 14:51:28 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:50.023 14:51:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:50.023 14:51:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.023 14:51:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:50.023 ************************************ 00:11:50.023 START TEST nvmf_zcopy 00:11:50.023 ************************************ 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:50.023 * Looking for test storage... 
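At this point nvmf_target_multipath has finished (the real/user/sys block above is its wall-clock accounting) and the harness moves on to nvmf_zcopy. Every test in this log is launched through the same run_test helper, which prints the START TEST / END TEST banners and times the script. The sketch below is a hypothetical minimal re-implementation inferred only from the banners and timing visible here; the real helper lives in test/common/autotest_common.sh and does more, including the argument checks seen in the trace.

# Hypothetical run_test-style wrapper, inferred from the banners in this log (not the real helper)
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}
# usage matching the trace: run_test nvmf_zcopy ./test/nvmf/target/zcopy.sh --transport=tcp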
00:11:50.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:50.023 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:50.024 Cannot find device "nvmf_tgt_br" 00:11:50.024 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:11:50.024 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:50.282 Cannot find device "nvmf_tgt_br2" 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:50.282 Cannot find device "nvmf_tgt_br" 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:50.282 Cannot find device "nvmf_tgt_br2" 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:50.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:50.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:50.282 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:50.283 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:50.283 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:50.283 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:50.283 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:50.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:11:50.542 00:11:50.542 --- 10.0.0.2 ping statistics --- 00:11:50.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.542 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:50.542 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:50.542 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:11:50.542 00:11:50.542 --- 10.0.0.3 ping statistics --- 00:11:50.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.542 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:50.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:50.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:50.542 00:11:50.542 --- 10.0.0.1 ping statistics --- 00:11:50.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.542 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:50.542 14:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:50.542 14:51:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:50.542 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:50.542 14:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:50.542 14:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:50.542 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=75978 00:11:50.542 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 75978 00:11:50.542 14:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 75978 ']' 00:11:50.542 14:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:50.542 14:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.542 14:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:50.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.542 14:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.542 14:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:50.542 14:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:50.542 [2024-07-12 14:51:29.094158] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:11:50.542 [2024-07-12 14:51:29.094290] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.801 [2024-07-12 14:51:29.247486] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.801 [2024-07-12 14:51:29.322428] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.801 [2024-07-12 14:51:29.322491] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:50.801 [2024-07-12 14:51:29.322505] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.802 [2024-07-12 14:51:29.322536] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.802 [2024-07-12 14:51:29.322547] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.802 [2024-07-12 14:51:29.322576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:51.737 [2024-07-12 14:51:30.154506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:51.737 [2024-07-12 14:51:30.170603] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:51.737 malloc0 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.737 
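The zcopy target is configured over JSON-RPC: a TCP transport with zero-copy enabled (--zcopy), subsystem nqn.2016-06.io.spdk:cnode1 limited to 10 namespaces, a data listener and a discovery listener on 10.0.0.2:4420, and a malloc bdev (32 MiB, 4096-byte blocks); the namespace attach follows immediately below in the trace. The same sequence expressed with scripts/rpc.py, as a readability sketch of the rpc_cmd calls rather than zcopy.sh itself (flags copied from the trace, RPC socket left at the default /var/tmp/spdk.sock):

# Target-side setup from this trace, restated as direct rpc.py calls
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # this step is the next rpc_cmd in the trace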
14:51:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:51.737 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:51.738 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:51.738 { 00:11:51.738 "params": { 00:11:51.738 "name": "Nvme$subsystem", 00:11:51.738 "trtype": "$TEST_TRANSPORT", 00:11:51.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:51.738 "adrfam": "ipv4", 00:11:51.738 "trsvcid": "$NVMF_PORT", 00:11:51.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:51.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:51.738 "hdgst": ${hdgst:-false}, 00:11:51.738 "ddgst": ${ddgst:-false} 00:11:51.738 }, 00:11:51.738 "method": "bdev_nvme_attach_controller" 00:11:51.738 } 00:11:51.738 EOF 00:11:51.738 )") 00:11:51.738 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:51.738 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:11:51.738 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:51.738 14:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:51.738 "params": { 00:11:51.738 "name": "Nvme1", 00:11:51.738 "trtype": "tcp", 00:11:51.738 "traddr": "10.0.0.2", 00:11:51.738 "adrfam": "ipv4", 00:11:51.738 "trsvcid": "4420", 00:11:51.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:51.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:51.738 "hdgst": false, 00:11:51.738 "ddgst": false 00:11:51.738 }, 00:11:51.738 "method": "bdev_nvme_attach_controller" 00:11:51.738 }' 00:11:51.738 [2024-07-12 14:51:30.261506] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:11:51.738 [2024-07-12 14:51:30.261627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76029 ] 00:11:51.996 [2024-07-12 14:51:30.400705] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.996 [2024-07-12 14:51:30.474396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.996 Running I/O for 10 seconds... 
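The initiator side never writes a config file: gen_nvmf_target_json (from the test's nvmf/common.sh) expands the bdev_nvme_attach_controller entry printed above, and the script hands it to bdevperf over /dev/fd/62 via process substitution. Below is a standalone sketch of an equivalent run; only the attach entry is visible in the log, so the surrounding "subsystems"/"bdev" envelope of the SPDK JSON config is an assumption here, as is the temporary file name:

    # Resolved config, matching the values printf'd by gen_nvmf_target_json in the trace
    cat > /tmp/bdevperf_nvmf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same flags as the first run in the trace: 10 s verify workload, queue depth 128, 8 KiB I/O
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -t 10 -q 128 -w verify -o 8192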
00:12:04.209
00:12:04.210                                                                    Latency(us)
00:12:04.210 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:12:04.210 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:12:04.210 Verification LBA range: start 0x0 length 0x1000
00:12:04.210 Nvme1n1                     :      10.01    5878.32      45.92      0.00      0.00   21703.24     927.19   32887.16
00:12:04.210 ===================================================================================================================
00:12:04.210 Total                       :             5878.32      45.92      0.00      0.00   21703.24     927.19   32887.16
00:12:04.210 14:51:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=76147 00:12:04.210 14:51:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:04.210 14:51:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:04.210 14:51:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:04.210 14:51:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:04.210 14:51:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:04.210 14:51:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:04.210 14:51:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:04.210 14:51:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:04.210 { 00:12:04.210 "params": { 00:12:04.210 "name": "Nvme$subsystem", 00:12:04.210 "trtype": "$TEST_TRANSPORT", 00:12:04.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:04.210 "adrfam": "ipv4", 00:12:04.210 "trsvcid": "$NVMF_PORT", 00:12:04.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:04.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:04.210 "hdgst": ${hdgst:-false}, 00:12:04.210 "ddgst": ${ddgst:-false} 00:12:04.210 }, 00:12:04.210 "method": "bdev_nvme_attach_controller" 00:12:04.210 } 00:12:04.210 EOF 00:12:04.210 )") 00:12:04.210 14:51:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:04.210 14:51:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
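Two quick consistency checks on the verify-run table above, using only numbers from the table plus the queue depth (128) and 8192-byte I/O size from the bdevperf command line:

    45.92 MiB/s  ≈ 5878.32 IOPS x 8192 B / 2^20    # throughput = IOPS x I/O size
    5878.32 IOPS ≈ 128 / 21703.24 us               # Little's law: QD / avg latency ≈ 5898, close to 5878.32

The second invocation traced just above switches bdevperf to a 5-second 50/50 random read/write workload (-w randrw -M 50) at the same queue depth and I/O size.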
00:12:04.210 [2024-07-12 14:51:40.803460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.210 [2024-07-12 14:51:40.803502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.210 14:51:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:04.210 14:51:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:04.210 "params": { 00:12:04.210 "name": "Nvme1", 00:12:04.210 "trtype": "tcp", 00:12:04.210 "traddr": "10.0.0.2", 00:12:04.210 "adrfam": "ipv4", 00:12:04.210 "trsvcid": "4420", 00:12:04.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:04.210 "hdgst": false, 00:12:04.210 "ddgst": false 00:12:04.210 }, 00:12:04.210 "method": "bdev_nvme_attach_controller" 00:12:04.210 }' 00:12:04.210 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.210 [2024-07-12 14:51:40.815446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.210 [2024-07-12 14:51:40.815482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.210 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.210 [2024-07-12 14:51:40.823429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.210 [2024-07-12 14:51:40.823461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.210 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.210 [2024-07-12 14:51:40.831426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.210 [2024-07-12 14:51:40.831459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.210 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.210 [2024-07-12 14:51:40.839437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.210 [2024-07-12 14:51:40.839472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.210 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.210 [2024-07-12 14:51:40.847437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.210 [2024-07-12 14:51:40.847473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
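The error pair that first appears here repeats for the remainder of the capture: while I/O is in flight, nvmf_subsystem_add_ns keeps being issued for NSID 1, which is still attached ("Requested NSID 1 already in use"), so the target rejects every attempt with code -32602. Stripped of the Go-style client log prefix, each failing exchange corresponds to a JSON-RPC request and error response of roughly the following shape (the jsonrpc/id framing is the standard JSON-RPC 2.0 envelope, assumed here rather than shown in the log):

    {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
     "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                "namespace": {"bdev_name": "malloc0", "nsid": 1, "no_auto_visible": false}}}

    {"jsonrpc": "2.0", "id": 1, "error": {"code": -32602, "message": "Invalid parameters"}}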
00:12:04.210 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.210 [2024-07-12 14:51:40.855439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.210 [2024-07-12 14:51:40.855472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.210 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.210 [2024-07-12 14:51:40.863441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.210 [2024-07-12 14:51:40.863476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.210 [2024-07-12 14:51:40.864130] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:12:04.210 [2024-07-12 14:51:40.864237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76147 ] 00:12:04.210 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.210 [2024-07-12 14:51:40.871451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.210 [2024-07-12 14:51:40.871485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.210 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.210 [2024-07-12 14:51:40.879448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.210 [2024-07-12 14:51:40.879481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.210 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.210 [2024-07-12 14:51:40.887486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.210 [2024-07-12 14:51:40.887548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.210 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.210 [2024-07-12 14:51:40.895466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 
1 already in use 00:12:04.210 [2024-07-12 14:51:40.895503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.210 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.210 [2024-07-12 14:51:40.907493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.210 [2024-07-12 14:51:40.907547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.210 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.210 [2024-07-12 14:51:40.919582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.210 [2024-07-12 14:51:40.919651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.210 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.210 [2024-07-12 14:51:40.931500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.210 [2024-07-12 14:51:40.931561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.210 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.210 [2024-07-12 14:51:40.939469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.210 [2024-07-12 14:51:40.939502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:40.947464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:40.947499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:40.959486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:40.959537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:40.971491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:40.971544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:40.983493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:40.983547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:40.995507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:40.995559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.004046] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.211 [2024-07-12 14:51:41.007539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.007573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.019548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.019589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.031571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.031631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.043575] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.043633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.055585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.055638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.067576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.067630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.079579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.079628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.091566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.091620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 [2024-07-12 14:51:41.092920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.099545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.099615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.111594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.111653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.123619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.123693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.135633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.135698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.147613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.147667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.159591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.159640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.171583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.171627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.183590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.183637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:04.211 [2024-07-12 14:51:41.195592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.195636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.207594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.207635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.215589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.215640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.223566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.223599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.231631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.211 [2024-07-12 14:51:41.231669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.211 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.211 [2024-07-12 14:51:41.239615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.239649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 Running I/O for 5 seconds... 
00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.254560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.254605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.270225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.270273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.287388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.287443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.303394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.303465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.312687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.312738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.327729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.327793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:12:04.212 [2024-07-12 14:51:41.338228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.338283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.352694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.352763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.369662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.369712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.380551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.380597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.392279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.392323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.404648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.404694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.421136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.421184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.437625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.437672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.454887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.454941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.470842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.470890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.480753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.480797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.497228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.497277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.512553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.512595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.524080] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.524125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.538443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.538491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.554926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.554973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.571575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.571625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.212 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.212 [2024-07-12 14:51:41.589033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.212 [2024-07-12 14:51:41.589081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.213 [2024-07-12 14:51:41.606471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.213 [2024-07-12 14:51:41.606535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.213 [2024-07-12 14:51:41.622682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.213 [2024-07-12 14:51:41.622730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.213 [2024-07-12 14:51:41.639191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.213 [2024-07-12 14:51:41.639238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.213 [2024-07-12 14:51:41.655935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.213 [2024-07-12 14:51:41.656005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.213 [2024-07-12 14:51:41.666632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.213 [2024-07-12 14:51:41.666690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.213 [2024-07-12 14:51:41.679005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.213 [2024-07-12 14:51:41.679063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.213 [2024-07-12 14:51:41.690568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.213 [2024-07-12 14:51:41.690612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.213 [2024-07-12 14:51:41.702717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.213 [2024-07-12 14:51:41.702761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.213 [2024-07-12 14:51:41.716325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:04.213 [2024-07-12 14:51:41.716370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.213 [2024-07-12 14:51:41.728119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.213 [2024-07-12 14:51:41.728164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.213 [2024-07-12 14:51:41.739677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.213 [2024-07-12 14:51:41.739721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.213 [2024-07-12 14:51:41.755951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.213 [2024-07-12 14:51:41.756024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.213 [2024-07-12 14:51:41.773029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.213 [2024-07-12 14:51:41.773082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.213 [2024-07-12 14:51:41.784250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.213 [2024-07-12 14:51:41.784298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.213 [2024-07-12 14:51:41.799574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.213 [2024-07-12 14:51:41.799624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:04.213 [2024-07-12 14:51:41.816985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.213 [2024-07-12 14:51:41.817037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.213 2024/07/12 14:51:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
(the same three-record failure sequence repeats for each subsequent retry, timestamped 2024-07-12 14:51:41.833 through 14:51:42.894, elapsed markers 00:12:04.213-00:12:04.474; every attempt is rejected with Code=-32602 Msg=Invalid parameters)
00:12:04.474 [2024-07-12 14:51:42.894406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
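The repeated failure above is the NVMe-oF target rejecting nvmf_subsystem_add_ns because NSID 1 is already registered on nqn.2016-06.io.spdk:cnode1. As a rough illustration only (not the harness's actual client), the request shown in the logged params map can be reproduced against SPDK's JSON-RPC socket with a minimal Python sketch; the socket path /var/tmp/spdk.sock is an assumed default, not taken from this log:

    import json
    import socket

    # Hypothetical reproduction of the call logged above; adding NSID 1 a second
    # time on the same subsystem is expected to fail with Code=-32602.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/spdk.sock")  # assumed default RPC socket path
        sock.sendall(json.dumps(request).encode())
        print(sock.recv(65536).decode())    # expected: JSON-RPC error -32602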
00:12:04.474 [2024-07-12 14:51:42.894450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.474 2024/07/12 14:51:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
(the failure loop continues with the same three records per retry, timestamped 2024-07-12 14:51:42.911 through 14:51:43.777, elapsed markers 00:12:04.474-00:12:05.250, each attempt rejected with Code=-32602 Msg=Invalid parameters)
00:12:05.250 [2024-07-12 14:51:43.795379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*:
Requested NSID 1 already in use 00:12:05.250 [2024-07-12 14:51:43.795469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.250 2024/07/12 14:51:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.250 [2024-07-12 14:51:43.809251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.250 [2024-07-12 14:51:43.809326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.250 2024/07/12 14:51:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.250 [2024-07-12 14:51:43.828293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.250 [2024-07-12 14:51:43.828396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.250 2024/07/12 14:51:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.250 [2024-07-12 14:51:43.843724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.250 [2024-07-12 14:51:43.843796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.250 2024/07/12 14:51:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.250 [2024-07-12 14:51:43.861596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.250 [2024-07-12 14:51:43.861673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.250 2024/07/12 14:51:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.250 [2024-07-12 14:51:43.879068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.250 [2024-07-12 14:51:43.879133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.250 2024/07/12 14:51:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.250 [2024-07-12 14:51:43.898064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.250 [2024-07-12 14:51:43.898135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:43.910743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:43.910787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:43.924693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:43.924735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:43.941764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:43.941810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:43.952335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:43.952392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:43.968715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:43.968787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:43.983904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:43.983973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:43.995269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:05.509 [2024-07-12 14:51:43.995345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:44.013021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:44.013086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:44.027732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:44.027797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:44.045025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:44.045090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:44.060480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:44.060549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:44.071115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:44.071163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:44.086294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:44.086367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:44.103941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:44.104028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:44.120649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:44.120719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:44.137725] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:44.137790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.509 [2024-07-12 14:51:44.149289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.509 [2024-07-12 14:51:44.149353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.509 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.163572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.163644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.180783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.180853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.194782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.194850] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.211603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.211657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.226446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.226540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.242634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.242690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.253696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.253759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.265435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.265493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.282007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.282059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.290903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.290956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.306773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.306836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.323438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.323487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.340643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.340695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.356717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.356772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.366903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.366953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.379251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.379316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.395040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.395103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.768 [2024-07-12 14:51:44.405920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.768 [2024-07-12 14:51:44.405974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.768 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.027 [2024-07-12 14:51:44.421311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.027 [2024-07-12 14:51:44.421376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.027 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.027 [2024-07-12 14:51:44.438028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.027 [2024-07-12 14:51:44.438083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.027 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.027 [2024-07-12 14:51:44.452909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.027 [2024-07-12 14:51:44.452967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.027 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.027 [2024-07-12 14:51:44.463553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.027 [2024-07-12 14:51:44.463607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.027 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:06.027 [2024-07-12 14:51:44.478992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.027 [2024-07-12 14:51:44.479068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.027 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.027 [2024-07-12 14:51:44.494364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.027 [2024-07-12 14:51:44.494435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.027 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.027 [2024-07-12 14:51:44.512570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.027 [2024-07-12 14:51:44.512645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.027 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.027 [2024-07-12 14:51:44.527613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.027 [2024-07-12 14:51:44.527684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.027 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.027 [2024-07-12 14:51:44.543982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.027 [2024-07-12 14:51:44.544059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.027 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.027 [2024-07-12 14:51:44.559059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.027 [2024-07-12 14:51:44.559131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.028 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.028 [2024-07-12 14:51:44.575790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.028 [2024-07-12 14:51:44.575865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.028 2024/07/12 14:51:44 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.028 [2024-07-12 14:51:44.592890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.028 [2024-07-12 14:51:44.592956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.028 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.028 [2024-07-12 14:51:44.609342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.028 [2024-07-12 14:51:44.609411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.028 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.028 [2024-07-12 14:51:44.626731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.028 [2024-07-12 14:51:44.626799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.028 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.028 [2024-07-12 14:51:44.642804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.028 [2024-07-12 14:51:44.642862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.028 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.028 [2024-07-12 14:51:44.659488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.028 [2024-07-12 14:51:44.659548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.028 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.028 [2024-07-12 14:51:44.676231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.028 [2024-07-12 14:51:44.676292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.028 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.286 [2024-07-12 14:51:44.692121] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.692177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.709554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.709605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.724955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.725004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.735604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.735668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.747719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.747766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.764419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.764491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.782492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.782563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.798138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.798205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.812605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.812675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.830208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.830281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.844996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.845078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.860622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.860684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.876573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.876624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.887197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.887243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.901765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.901821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.911864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.911909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.287 [2024-07-12 14:51:44.926435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.287 [2024-07-12 14:51:44.926492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.287 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.546 [2024-07-12 14:51:44.944911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.546 [2024-07-12 14:51:44.944959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.546 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.546 [2024-07-12 14:51:44.960027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.546 [2024-07-12 14:51:44.960080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.546 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.546 [2024-07-12 14:51:44.970321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.546 [2024-07-12 14:51:44.970369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.546 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.546 [2024-07-12 14:51:44.981205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.546 [2024-07-12 14:51:44.981252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.546 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.546 [2024-07-12 14:51:44.993673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.546 [2024-07-12 14:51:44.993728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.546 2024/07/12 14:51:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.546 [2024-07-12 14:51:45.003459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.546 [2024-07-12 14:51:45.003505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.546 2024/07/12 14:51:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.546 [2024-07-12 14:51:45.014692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.546 [2024-07-12 14:51:45.014739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.546 2024/07/12 14:51:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.546 [2024-07-12 14:51:45.025344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.546 [2024-07-12 14:51:45.025389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.546 2024/07/12 14:51:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.546 [2024-07-12 14:51:45.036257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.546 [2024-07-12 14:51:45.036302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.546 2024/07/12 14:51:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.546 [2024-07-12 14:51:45.046837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:06.546 [2024-07-12 14:51:45.046879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.546 2024/07/12 14:51:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[The same three-part failure repeats verbatim for every retry from 14:51:45.057 through 14:51:46.067, with only the timestamps changing: subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext rejects the request with "Requested NSID 1 already in use", nvmf_rpc.c:1546:nvmf_rpc_ns_paused reports "Unable to add namespace", and the JSON-RPC client logs Code=-32602 Msg=Invalid parameters for the identical nvmf_subsystem_add_ns parameters.]
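Every attempt above re-submits the same parameters visible in the log (bdev malloc0, NSID 1, subsystem nqn.2016-06.io.spdk:cnode1) while NSID 1 is still attached, so spdk_nvmf_subsystem_add_ns_ext refuses each one and the RPC layer maps the refusal to JSON-RPC error -32602; the zcopy test nevertheless completes further down. A minimal sketch of the colliding call, assuming the stock scripts/rpc.py front end (the positional nqn/bdev order and the -n nsid flag are ordinary rpc.py conventions, not taken from this log):

  # NSID 1 is already occupied on cnode1, so a second add with the same NSID
  # is rejected with Code=-32602 Msg=Invalid parameters, exactly as logged above.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1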
00:12:07.582 [2024-07-12 14:51:46.084073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.582 [2024-07-12 14:51:46.084123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.582 2024/07/12 14:51:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[The same failure recurs at every retry from 14:51:46.101 through 14:51:46.249, and again between 14:51:46.261 and 14:51:46.405 after the I/O summary below.]
00:12:07.841 Latency(us)
00:12:07.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:07.841 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:07.841 Nvme1n1 : 5.01 10689.24 83.51 0.00 0.00 11960.00 3157.64 24784.52
00:12:07.841 ===================================================================================================================
00:12:07.841 Total : 10689.24 83.51 0.00 0.00 11960.00 3157.64 24784.52
00:12:07.841 [2024-07-12 14:51:46.417654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.841 [2024-07-12 14:51:46.417701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.841 2024/07/12
14:51:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.841 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76147) - No such process 00:12:07.841 14:51:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 76147 00:12:07.841 14:51:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.841 14:51:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.841 14:51:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:07.841 14:51:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.841 14:51:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:07.841 14:51:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.841 14:51:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:07.841 delay0 00:12:07.841 14:51:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.841 14:51:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:07.841 14:51:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.841 14:51:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:07.841 14:51:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.841 14:51:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:08.099 [2024-07-12 14:51:46.607763] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:14.648 Initializing NVMe Controllers 00:12:14.648 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:14.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:14.648 Initialization complete. Launching workers. 
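The sequence that closes the zcopy test is easier to read as the underlying RPCs: the leftover perf process (pid 76147) is reaped, NSID 1 is detached, a delay bdev is layered on top of malloc0 and re-exported as NSID 1, and the abort example is pointed at the TCP listener. A minimal sketch of those steps, assuming the rpc_cmd helper forwards its arguments to scripts/rpc.py as in the SPDK test harness (delay latencies are given in microseconds):

  # detach the namespace that the error loop above was colliding with
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # wrap malloc0 in a delay bdev that adds roughly one second to every I/O
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # export the slow bdev as NSID 1 and drive it with the abort example over TCP
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The artificial latency is what lets the abort tool keep up to 64 commands queued and cancel most of them in flight, which is the submitted/success/unsuccess accounting reported just below.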
00:12:14.648 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 291, failed: 5906 00:12:14.648 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6130, failed to submit 67 00:12:14.648 success 5990, unsuccess 140, failed 0 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:14.648 rmmod nvme_tcp 00:12:14.648 rmmod nvme_fabrics 00:12:14.648 rmmod nvme_keyring 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 75978 ']' 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 75978 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 75978 ']' 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 75978 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75978 00:12:14.648 killing process with pid 75978 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75978' 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 75978 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 75978 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:14.648 ************************************ 00:12:14.648 END TEST nvmf_zcopy 00:12:14.648 ************************************ 00:12:14.648 00:12:14.648 real 
0m24.454s 00:12:14.648 user 0m39.093s 00:12:14.648 sys 0m6.818s 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:14.648 14:51:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:14.648 14:51:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:14.648 14:51:53 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:14.648 14:51:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:14.648 14:51:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.648 14:51:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:14.648 ************************************ 00:12:14.648 START TEST nvmf_nmic 00:12:14.648 ************************************ 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:14.648 * Looking for test storage... 00:12:14.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:14.648 14:51:53 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:14.649 Cannot find device "nvmf_tgt_br" 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:14.649 Cannot find device "nvmf_tgt_br2" 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:14.649 Cannot find device "nvmf_tgt_br" 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:14.649 Cannot find device "nvmf_tgt_br2" 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:14.649 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:14.649 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:14.649 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:14.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:14.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:12:14.908 00:12:14.908 --- 10.0.0.2 ping statistics --- 00:12:14.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.908 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:14.908 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:14.908 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:12:14.908 00:12:14.908 --- 10.0.0.3 ping statistics --- 00:12:14.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.908 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:14.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:12:14.908 00:12:14.908 --- 10.0.0.1 ping statistics --- 00:12:14.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.908 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=76471 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 76471 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 76471 ']' 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:14.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:14.908 14:51:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:14.908 [2024-07-12 14:51:53.543670] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
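[editor's note] For anyone recreating this environment outside the harness, the nvmf_veth_init steps traced above condense to roughly the sketch below. Interface names, namespace name, addresses, and iptables rules are copied from the trace; the grouping and ordering are lightly condensed and the commands must run as root. This is a reading aid, not the verbatim test code.

    # Sketch of the veth/namespace topology built by nvmf_veth_init (condensed from the trace above).
    # Initiator side stays in the default namespace; both target interfaces live in nvmf_tgt_ns_spdk;
    # the peer ends are enslaved to bridge nvmf_br so 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up;  ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Same reachability checks as the trace: initiator -> targets, then target ns -> initiator.
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected: the cleanup half of nvmf_veth_init tears down leftovers from a previous run before rebuilding the topology, so on a clean host those deletions simply fail.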
00:12:14.908 [2024-07-12 14:51:53.543781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.166 [2024-07-12 14:51:53.680120] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.166 [2024-07-12 14:51:53.751227] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.166 [2024-07-12 14:51:53.751467] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.166 [2024-07-12 14:51:53.751595] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.166 [2024-07-12 14:51:53.751686] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.166 [2024-07-12 14:51:53.751760] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.166 [2024-07-12 14:51:53.751948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.166 [2024-07-12 14:51:53.752030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.166 [2024-07-12 14:51:53.752633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.166 [2024-07-12 14:51:53.752637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:16.098 [2024-07-12 14:51:54.527537] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:16.098 Malloc0 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
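[editor's note] The nmic.sh flow around this point (set up cnode1, then try to add the same bdev to a second subsystem and expect the RPC to fail) can be condensed into the sketch below. rpc_cmd in the trace ultimately drives scripts/rpc.py against /var/tmp/spdk.sock; it is shown here as direct rpc.py calls for readability, which is an editorial simplification, not the literal test code.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Target-side setup traced above: TCP transport, one malloc bdev, subsystem cnode1.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Test case 1: the same Malloc0 cannot be attached to a second subsystem.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    # Expected failure: Malloc0 is already claimed (exclusive_write) by cnode1, so this
    # call returns JSON-RPC error -32602 "Invalid parameters", as the trace below shows.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0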
00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:16.098 [2024-07-12 14:51:54.583900] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.098 test case1: single bdev can't be used in multiple subsystems 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:16.098 [2024-07-12 14:51:54.607777] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:16.098 [2024-07-12 14:51:54.607823] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:16.098 [2024-07-12 14:51:54.607836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.098 2024/07/12 14:51:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:16.098 request: 00:12:16.098 { 00:12:16.098 "method": "nvmf_subsystem_add_ns", 00:12:16.098 "params": { 00:12:16.098 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:16.098 "namespace": { 00:12:16.098 "bdev_name": "Malloc0", 00:12:16.098 "no_auto_visible": false 00:12:16.098 } 00:12:16.098 } 00:12:16.098 } 00:12:16.098 Got JSON-RPC error response 00:12:16.098 GoRPCClient: error on JSON-RPC call 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@29 -- # nmic_status=1 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:16.098 Adding namespace failed - expected result. 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:16.098 test case2: host connect to nvmf target in multiple paths 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:16.098 [2024-07-12 14:51:54.619990] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:16.098 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.099 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.356 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:16.356 14:51:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:16.356 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:16.356 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.356 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:16.356 14:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:12:18.883 14:51:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:18.883 14:51:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:18.883 14:51:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:18.883 14:51:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:18.883 14:51:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.883 14:51:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:12:18.883 14:51:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:18.883 [global] 00:12:18.883 thread=1 00:12:18.883 invalidate=1 00:12:18.883 rw=write 00:12:18.883 time_based=1 00:12:18.883 runtime=1 00:12:18.883 ioengine=libaio 00:12:18.883 direct=1 00:12:18.883 bs=4096 00:12:18.883 iodepth=1 00:12:18.883 norandommap=0 00:12:18.883 numjobs=1 00:12:18.883 00:12:18.883 verify_dump=1 00:12:18.883 verify_backlog=512 00:12:18.883 verify_state_save=0 00:12:18.883 do_verify=1 00:12:18.883 verify=crc32c-intel 00:12:18.883 [job0] 00:12:18.883 filename=/dev/nvme0n1 00:12:18.883 Could not set queue depth (nvme0n1) 00:12:18.883 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:18.883 fio-3.35 00:12:18.883 
Starting 1 thread 00:12:19.816 00:12:19.816 job0: (groupid=0, jobs=1): err= 0: pid=76575: Fri Jul 12 14:51:58 2024 00:12:19.816 read: IOPS=2880, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1001msec) 00:12:19.816 slat (usec): min=13, max=110, avg=21.63, stdev= 9.28 00:12:19.816 clat (usec): min=129, max=7767, avg=167.73, stdev=210.86 00:12:19.816 lat (usec): min=145, max=7792, avg=189.37, stdev=211.44 00:12:19.816 clat percentiles (usec): 00:12:19.816 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:12:19.816 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:12:19.816 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 200], 00:12:19.816 | 99.00th=[ 247], 99.50th=[ 265], 99.90th=[ 3621], 99.95th=[ 7439], 00:12:19.816 | 99.99th=[ 7767] 00:12:19.816 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:19.816 slat (usec): min=19, max=136, avg=28.51, stdev=10.85 00:12:19.816 clat (usec): min=88, max=7399, avg=114.89, stdev=141.11 00:12:19.816 lat (usec): min=111, max=7427, avg=143.40, stdev=141.74 00:12:19.816 clat percentiles (usec): 00:12:19.816 | 1.00th=[ 95], 5.00th=[ 97], 10.00th=[ 99], 20.00th=[ 101], 00:12:19.816 | 30.00th=[ 103], 40.00th=[ 105], 50.00th=[ 109], 60.00th=[ 112], 00:12:19.816 | 70.00th=[ 115], 80.00th=[ 120], 90.00th=[ 129], 95.00th=[ 143], 00:12:19.816 | 99.00th=[ 172], 99.50th=[ 180], 99.90th=[ 245], 99.95th=[ 2835], 00:12:19.816 | 99.99th=[ 7373] 00:12:19.816 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:12:19.816 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:19.816 lat (usec) : 100=8.51%, 250=91.00%, 500=0.35%, 750=0.02%, 1000=0.02% 00:12:19.816 lat (msec) : 2=0.02%, 4=0.03%, 10=0.05% 00:12:19.816 cpu : usr=3.40%, sys=10.60%, ctx=5972, majf=0, minf=2 00:12:19.816 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.816 issued rwts: total=2883,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.816 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.816 00:12:19.816 Run status group 0 (all jobs): 00:12:19.816 READ: bw=11.2MiB/s (11.8MB/s), 11.2MiB/s-11.2MiB/s (11.8MB/s-11.8MB/s), io=11.3MiB (11.8MB), run=1001-1001msec 00:12:19.816 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:12:19.816 00:12:19.816 Disk stats (read/write): 00:12:19.816 nvme0n1: ios=2610/2749, merge=0/0, ticks=468/343, in_queue=811, util=90.38% 00:12:19.816 14:51:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:19.816 14:51:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.816 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:19.816 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:19.816 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.816 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:19.816 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.816 14:51:58 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1231 -- # return 0 00:12:19.816 14:51:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:19.816 14:51:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:19.816 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.816 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:20.075 rmmod nvme_tcp 00:12:20.075 rmmod nvme_fabrics 00:12:20.075 rmmod nvme_keyring 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 76471 ']' 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 76471 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 76471 ']' 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 76471 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76471 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:20.075 killing process with pid 76471 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76471' 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 76471 00:12:20.075 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 76471 00:12:20.333 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:20.333 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:20.333 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:20.333 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:20.333 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:20.333 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.333 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.333 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.333 14:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:20.333 00:12:20.333 real 0m5.739s 00:12:20.333 user 0m19.448s 00:12:20.333 sys 0m1.289s 00:12:20.333 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:20.333 ************************************ 00:12:20.333 END TEST nvmf_nmic 00:12:20.333 ************************************ 00:12:20.333 14:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.333 14:51:58 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:20.333 14:51:58 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:20.333 14:51:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:20.333 14:51:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.333 14:51:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:20.333 ************************************ 00:12:20.333 START TEST nvmf_fio_target 00:12:20.333 ************************************ 00:12:20.333 14:51:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:20.333 * Looking for test storage... 00:12:20.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:20.334 Cannot find device "nvmf_tgt_br" 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:20.334 Cannot find device "nvmf_tgt_br2" 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:12:20.334 Cannot find device "nvmf_tgt_br" 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:20.334 Cannot find device "nvmf_tgt_br2" 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:12:20.334 14:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:20.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:20.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:20.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:12:20.592 00:12:20.592 --- 10.0.0.2 ping statistics --- 00:12:20.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.592 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:20.592 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:20.592 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:12:20.592 00:12:20.592 --- 10.0.0.3 ping statistics --- 00:12:20.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.592 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:20.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:12:20.592 00:12:20.592 --- 10.0.0.1 ping statistics --- 00:12:20.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.592 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:20.592 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:20.851 14:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:20.851 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:20.851 14:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:20.851 14:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.851 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=76755 00:12:20.851 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.851 14:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 76755 00:12:20.851 14:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 76755 ']' 00:12:20.851 14:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.851 14:51:59 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:20.851 14:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.851 14:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:20.851 14:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.851 [2024-07-12 14:51:59.323782] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:12:20.851 [2024-07-12 14:51:59.323913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.851 [2024-07-12 14:51:59.466229] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.108 [2024-07-12 14:51:59.527599] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.108 [2024-07-12 14:51:59.527653] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.109 [2024-07-12 14:51:59.527664] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.109 [2024-07-12 14:51:59.527672] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.109 [2024-07-12 14:51:59.527679] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.109 [2024-07-12 14:51:59.527757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.109 [2024-07-12 14:51:59.528979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.109 [2024-07-12 14:51:59.529034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.109 [2024-07-12 14:51:59.529044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.674 14:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.674 14:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:12:21.674 14:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.674 14:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:21.674 14:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.932 14:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.932 14:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:22.189 [2024-07-12 14:52:00.655098] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.189 14:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:22.446 14:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:22.446 14:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:23.008 14:52:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 
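[editor's note] The fio.sh RPC sequence traced in this part of the log builds a richer target-side topology than nmic.sh: two plain malloc bdevs, a RAID0 over two more, and a concat over three more, all exported as namespaces of cnode1. Condensed (malloc bdev names are auto-assigned in creation order; the shell loop and $rpc variable are editorial shorthand, not the literal script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # Seven 64 MiB / 512 B-block malloc bdevs -> Malloc0..Malloc6.
    for _ in $(seq 7); do $rpc bdev_malloc_create 64 512; done
    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    # Initiator side: one connect exposes the four namespaces as nvme0n1..nvme0n4,
    # which the fio wrapper then targets as job0..job3.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
         --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c \
         --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c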
00:12:23.008 14:52:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:23.266 14:52:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:23.266 14:52:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:23.523 14:52:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:23.523 14:52:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:23.795 14:52:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:24.052 14:52:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:24.052 14:52:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:24.326 14:52:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:24.326 14:52:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:24.582 14:52:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:24.582 14:52:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:24.840 14:52:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:25.106 14:52:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:25.106 14:52:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:25.388 14:52:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:25.388 14:52:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:25.646 14:52:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.211 [2024-07-12 14:52:04.659059] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.211 14:52:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:26.468 14:52:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:26.726 14:52:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.726 14:52:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:26.726 14:52:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:26.726 14:52:05 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.726 14:52:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:26.726 14:52:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:26.726 14:52:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:29.253 14:52:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:29.253 14:52:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.253 14:52:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:29.253 14:52:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:29.253 14:52:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.253 14:52:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:29.253 14:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:29.253 [global] 00:12:29.253 thread=1 00:12:29.253 invalidate=1 00:12:29.253 rw=write 00:12:29.253 time_based=1 00:12:29.253 runtime=1 00:12:29.253 ioengine=libaio 00:12:29.253 direct=1 00:12:29.253 bs=4096 00:12:29.253 iodepth=1 00:12:29.253 norandommap=0 00:12:29.253 numjobs=1 00:12:29.253 00:12:29.253 verify_dump=1 00:12:29.253 verify_backlog=512 00:12:29.253 verify_state_save=0 00:12:29.253 do_verify=1 00:12:29.253 verify=crc32c-intel 00:12:29.253 [job0] 00:12:29.253 filename=/dev/nvme0n1 00:12:29.253 [job1] 00:12:29.253 filename=/dev/nvme0n2 00:12:29.253 [job2] 00:12:29.253 filename=/dev/nvme0n3 00:12:29.253 [job3] 00:12:29.253 filename=/dev/nvme0n4 00:12:29.253 Could not set queue depth (nvme0n1) 00:12:29.253 Could not set queue depth (nvme0n2) 00:12:29.253 Could not set queue depth (nvme0n3) 00:12:29.253 Could not set queue depth (nvme0n4) 00:12:29.253 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.253 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.253 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.253 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.253 fio-3.35 00:12:29.253 Starting 4 threads 00:12:30.185 00:12:30.185 job0: (groupid=0, jobs=1): err= 0: pid=77058: Fri Jul 12 14:52:08 2024 00:12:30.185 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:30.185 slat (nsec): min=12244, max=62900, avg=27305.41, stdev=7505.71 00:12:30.185 clat (usec): min=220, max=41153, avg=541.63, stdev=1280.00 00:12:30.185 lat (usec): min=236, max=41167, avg=568.93, stdev=1279.60 00:12:30.185 clat percentiles (usec): 00:12:30.185 | 1.00th=[ 314], 5.00th=[ 351], 10.00th=[ 367], 20.00th=[ 388], 00:12:30.185 | 30.00th=[ 416], 40.00th=[ 437], 50.00th=[ 453], 60.00th=[ 502], 00:12:30.185 | 70.00th=[ 562], 80.00th=[ 635], 90.00th=[ 693], 95.00th=[ 717], 00:12:30.185 | 99.00th=[ 766], 99.50th=[ 1057], 99.90th=[ 3261], 99.95th=[41157], 00:12:30.185 | 99.99th=[41157] 00:12:30.185 write: IOPS=1217, BW=4871KiB/s (4988kB/s)(4876KiB/1001msec); 0 zone resets 00:12:30.185 slat (usec): min=16, max=247, avg=37.18, stdev=12.60 00:12:30.185 clat (usec): min=113, max=656, 
avg=299.71, stdev=67.37 00:12:30.185 lat (usec): min=157, max=693, avg=336.88, stdev=65.81 00:12:30.185 clat percentiles (usec): 00:12:30.185 | 1.00th=[ 129], 5.00th=[ 227], 10.00th=[ 241], 20.00th=[ 260], 00:12:30.185 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:12:30.185 | 70.00th=[ 310], 80.00th=[ 334], 90.00th=[ 408], 95.00th=[ 437], 00:12:30.185 | 99.00th=[ 519], 99.50th=[ 570], 99.90th=[ 652], 99.95th=[ 660], 00:12:30.185 | 99.99th=[ 660] 00:12:30.185 bw ( KiB/s): min= 5512, max= 5512, per=22.28%, avg=5512.00, stdev= 0.00, samples=1 00:12:30.185 iops : min= 1378, max= 1378, avg=1378.00, stdev= 0.00, samples=1 00:12:30.185 lat (usec) : 250=7.58%, 500=73.34%, 750=18.37%, 1000=0.40% 00:12:30.185 lat (msec) : 2=0.22%, 4=0.04%, 50=0.04% 00:12:30.185 cpu : usr=2.60%, sys=4.90%, ctx=2245, majf=0, minf=7 00:12:30.185 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:30.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.185 issued rwts: total=1024,1219,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.185 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:30.185 job1: (groupid=0, jobs=1): err= 0: pid=77059: Fri Jul 12 14:52:08 2024 00:12:30.185 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:30.185 slat (nsec): min=27603, max=56534, avg=30013.96, stdev=3426.45 00:12:30.185 clat (usec): min=194, max=2168, avg=226.72, stdev=45.64 00:12:30.185 lat (usec): min=227, max=2197, avg=256.74, stdev=45.72 00:12:30.185 clat percentiles (usec): 00:12:30.185 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 215], 00:12:30.186 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 227], 00:12:30.186 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 243], 95.00th=[ 249], 00:12:30.186 | 99.00th=[ 262], 99.50th=[ 277], 99.90th=[ 338], 99.95th=[ 578], 00:12:30.186 | 99.99th=[ 2180] 00:12:30.186 write: IOPS=2215, BW=8863KiB/s (9076kB/s)(8872KiB/1001msec); 0 zone resets 00:12:30.186 slat (usec): min=37, max=128, avg=41.13, stdev= 4.78 00:12:30.186 clat (usec): min=136, max=230, avg=166.18, stdev=12.24 00:12:30.186 lat (usec): min=180, max=354, avg=207.31, stdev=12.93 00:12:30.186 clat percentiles (usec): 00:12:30.186 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:12:30.186 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:12:30.186 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 190], 00:12:30.186 | 99.00th=[ 202], 99.50th=[ 208], 99.90th=[ 227], 99.95th=[ 229], 00:12:30.186 | 99.99th=[ 231] 00:12:30.186 bw ( KiB/s): min= 8784, max= 8784, per=35.51%, avg=8784.00, stdev= 0.00, samples=1 00:12:30.186 iops : min= 2196, max= 2196, avg=2196.00, stdev= 0.00, samples=1 00:12:30.186 lat (usec) : 250=97.73%, 500=2.23%, 750=0.02% 00:12:30.186 lat (msec) : 4=0.02% 00:12:30.186 cpu : usr=2.70%, sys=12.00%, ctx=4268, majf=0, minf=8 00:12:30.186 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:30.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.186 issued rwts: total=2048,2218,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.186 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:30.186 job2: (groupid=0, jobs=1): err= 0: pid=77060: Fri Jul 12 14:52:08 2024 00:12:30.186 read: IOPS=1094, BW=4380KiB/s 
(4485kB/s)(4384KiB/1001msec) 00:12:30.186 slat (usec): min=14, max=103, avg=34.46, stdev=12.89 00:12:30.186 clat (usec): min=151, max=844, avg=421.40, stdev=152.99 00:12:30.186 lat (usec): min=167, max=893, avg=455.86, stdev=160.17 00:12:30.186 clat percentiles (usec): 00:12:30.186 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 334], 00:12:30.186 | 30.00th=[ 367], 40.00th=[ 392], 50.00th=[ 416], 60.00th=[ 445], 00:12:30.186 | 70.00th=[ 482], 80.00th=[ 537], 90.00th=[ 586], 95.00th=[ 734], 00:12:30.186 | 99.00th=[ 799], 99.50th=[ 824], 99.90th=[ 848], 99.95th=[ 848], 00:12:30.186 | 99.99th=[ 848] 00:12:30.186 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:30.186 slat (usec): min=23, max=117, avg=44.54, stdev=13.37 00:12:30.186 clat (usec): min=119, max=907, avg=275.20, stdev=77.44 00:12:30.186 lat (usec): min=151, max=932, avg=319.74, stdev=78.52 00:12:30.186 clat percentiles (usec): 00:12:30.186 | 1.00th=[ 126], 5.00th=[ 137], 10.00th=[ 151], 20.00th=[ 235], 00:12:30.186 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 285], 00:12:30.186 | 70.00th=[ 302], 80.00th=[ 326], 90.00th=[ 363], 95.00th=[ 388], 00:12:30.186 | 99.00th=[ 498], 99.50th=[ 578], 99.90th=[ 725], 99.95th=[ 906], 00:12:30.186 | 99.99th=[ 906] 00:12:30.186 bw ( KiB/s): min= 8192, max= 8192, per=33.11%, avg=8192.00, stdev= 0.00, samples=1 00:12:30.186 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:30.186 lat (usec) : 250=23.18%, 500=64.97%, 750=10.18%, 1000=1.67% 00:12:30.186 cpu : usr=2.00%, sys=8.20%, ctx=2632, majf=0, minf=11 00:12:30.186 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:30.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.186 issued rwts: total=1096,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.186 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:30.186 job3: (groupid=0, jobs=1): err= 0: pid=77061: Fri Jul 12 14:52:08 2024 00:12:30.186 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:30.186 slat (nsec): min=10347, max=61614, avg=24459.00, stdev=6843.21 00:12:30.186 clat (usec): min=225, max=41123, avg=545.15, stdev=1279.55 00:12:30.186 lat (usec): min=240, max=41160, avg=569.61, stdev=1280.02 00:12:30.186 clat percentiles (usec): 00:12:30.186 | 1.00th=[ 302], 5.00th=[ 355], 10.00th=[ 367], 20.00th=[ 392], 00:12:30.186 | 30.00th=[ 420], 40.00th=[ 445], 50.00th=[ 461], 60.00th=[ 506], 00:12:30.186 | 70.00th=[ 562], 80.00th=[ 635], 90.00th=[ 693], 95.00th=[ 709], 00:12:30.186 | 99.00th=[ 799], 99.50th=[ 1029], 99.90th=[ 3490], 99.95th=[41157], 00:12:30.186 | 99.99th=[41157] 00:12:30.186 write: IOPS=1216, BW=4867KiB/s (4984kB/s)(4872KiB/1001msec); 0 zone resets 00:12:30.186 slat (nsec): min=20901, max=94112, avg=41719.45, stdev=8974.36 00:12:30.186 clat (usec): min=118, max=632, avg=294.70, stdev=85.44 00:12:30.186 lat (usec): min=158, max=679, avg=336.42, stdev=84.26 00:12:30.186 clat percentiles (usec): 00:12:30.186 | 1.00th=[ 130], 5.00th=[ 186], 10.00th=[ 221], 20.00th=[ 249], 00:12:30.186 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 285], 00:12:30.186 | 70.00th=[ 297], 80.00th=[ 326], 90.00th=[ 408], 95.00th=[ 529], 00:12:30.186 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 627], 99.95th=[ 635], 00:12:30.186 | 99.99th=[ 635] 00:12:30.186 bw ( KiB/s): min= 5520, max= 5520, per=22.31%, avg=5520.00, stdev= 0.00, samples=1 
00:12:30.186 iops : min= 1380, max= 1380, avg=1380.00, stdev= 0.00, samples=1 00:12:30.186 lat (usec) : 250=11.42%, 500=66.90%, 750=21.05%, 1000=0.36% 00:12:30.186 lat (msec) : 2=0.18%, 4=0.04%, 50=0.04% 00:12:30.186 cpu : usr=1.80%, sys=5.80%, ctx=2242, majf=0, minf=9 00:12:30.186 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:30.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.186 issued rwts: total=1024,1218,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.186 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:30.186 00:12:30.186 Run status group 0 (all jobs): 00:12:30.186 READ: bw=20.3MiB/s (21.2MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=20.3MiB (21.3MB), run=1001-1001msec 00:12:30.186 WRITE: bw=24.2MiB/s (25.3MB/s), 4867KiB/s-8863KiB/s (4984kB/s-9076kB/s), io=24.2MiB (25.4MB), run=1001-1001msec 00:12:30.186 00:12:30.186 Disk stats (read/write): 00:12:30.186 nvme0n1: ios=976/1024, merge=0/0, ticks=521/281, in_queue=802, util=87.07% 00:12:30.186 nvme0n2: ios=1627/2048, merge=0/0, ticks=470/373, in_queue=843, util=91.23% 00:12:30.186 nvme0n3: ios=1024/1270, merge=0/0, ticks=416/365, in_queue=781, util=88.74% 00:12:30.186 nvme0n4: ios=966/1024, merge=0/0, ticks=514/302, in_queue=816, util=90.66% 00:12:30.186 14:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:30.186 [global] 00:12:30.186 thread=1 00:12:30.186 invalidate=1 00:12:30.186 rw=randwrite 00:12:30.186 time_based=1 00:12:30.186 runtime=1 00:12:30.186 ioengine=libaio 00:12:30.186 direct=1 00:12:30.186 bs=4096 00:12:30.186 iodepth=1 00:12:30.186 norandommap=0 00:12:30.186 numjobs=1 00:12:30.186 00:12:30.186 verify_dump=1 00:12:30.186 verify_backlog=512 00:12:30.186 verify_state_save=0 00:12:30.186 do_verify=1 00:12:30.186 verify=crc32c-intel 00:12:30.186 [job0] 00:12:30.186 filename=/dev/nvme0n1 00:12:30.186 [job1] 00:12:30.186 filename=/dev/nvme0n2 00:12:30.186 [job2] 00:12:30.186 filename=/dev/nvme0n3 00:12:30.186 [job3] 00:12:30.186 filename=/dev/nvme0n4 00:12:30.186 Could not set queue depth (nvme0n1) 00:12:30.186 Could not set queue depth (nvme0n2) 00:12:30.186 Could not set queue depth (nvme0n3) 00:12:30.186 Could not set queue depth (nvme0n4) 00:12:30.444 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.444 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.444 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.444 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.444 fio-3.35 00:12:30.444 Starting 4 threads 00:12:31.820 00:12:31.820 job0: (groupid=0, jobs=1): err= 0: pid=77119: Fri Jul 12 14:52:10 2024 00:12:31.820 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:31.820 slat (nsec): min=12766, max=56909, avg=20594.46, stdev=4595.46 00:12:31.820 clat (usec): min=248, max=2101, avg=542.68, stdev=117.06 00:12:31.820 lat (usec): min=279, max=2135, avg=563.27, stdev=118.46 00:12:31.820 clat percentiles (usec): 00:12:31.820 | 1.00th=[ 277], 5.00th=[ 404], 10.00th=[ 449], 20.00th=[ 478], 00:12:31.820 | 30.00th=[ 486], 40.00th=[ 498], 50.00th=[ 519], 60.00th=[ 537], 00:12:31.820 | 
70.00th=[ 553], 80.00th=[ 586], 90.00th=[ 725], 95.00th=[ 750], 00:12:31.820 | 99.00th=[ 848], 99.50th=[ 898], 99.90th=[ 930], 99.95th=[ 2114], 00:12:31.820 | 99.99th=[ 2114] 00:12:31.820 write: IOPS=1106, BW=4428KiB/s (4534kB/s)(4432KiB/1001msec); 0 zone resets 00:12:31.820 slat (usec): min=17, max=146, avg=35.50, stdev=11.64 00:12:31.820 clat (usec): min=133, max=612, avg=341.13, stdev=62.48 00:12:31.820 lat (usec): min=162, max=653, avg=376.63, stdev=66.51 00:12:31.820 clat percentiles (usec): 00:12:31.820 | 1.00th=[ 212], 5.00th=[ 227], 10.00th=[ 245], 20.00th=[ 289], 00:12:31.820 | 30.00th=[ 310], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 363], 00:12:31.820 | 70.00th=[ 383], 80.00th=[ 400], 90.00th=[ 416], 95.00th=[ 424], 00:12:31.820 | 99.00th=[ 465], 99.50th=[ 506], 99.90th=[ 562], 99.95th=[ 611], 00:12:31.820 | 99.99th=[ 611] 00:12:31.820 bw ( KiB/s): min= 4096, max= 4096, per=15.57%, avg=4096.00, stdev= 0.00, samples=1 00:12:31.820 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:31.820 lat (usec) : 250=5.63%, 500=65.71%, 750=26.27%, 1000=2.35% 00:12:31.820 lat (msec) : 4=0.05% 00:12:31.820 cpu : usr=1.00%, sys=5.00%, ctx=2133, majf=0, minf=13 00:12:31.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.820 issued rwts: total=1024,1108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.820 job1: (groupid=0, jobs=1): err= 0: pid=77120: Fri Jul 12 14:52:10 2024 00:12:31.820 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:31.820 slat (nsec): min=17019, max=71970, avg=26669.00, stdev=7426.62 00:12:31.820 clat (usec): min=147, max=428, avg=172.75, stdev=19.84 00:12:31.820 lat (usec): min=166, max=453, avg=199.42, stdev=21.00 00:12:31.820 clat percentiles (usec): 00:12:31.820 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:12:31.820 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:12:31.820 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 204], 00:12:31.820 | 99.00th=[ 243], 99.50th=[ 277], 99.90th=[ 379], 99.95th=[ 408], 00:12:31.820 | 99.99th=[ 429] 00:12:31.820 write: IOPS=2827, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1001msec); 0 zone resets 00:12:31.820 slat (usec): min=23, max=138, avg=37.43, stdev=10.31 00:12:31.820 clat (usec): min=103, max=266, avg=129.74, stdev=15.08 00:12:31.820 lat (usec): min=128, max=404, avg=167.17, stdev=18.80 00:12:31.820 clat percentiles (usec): 00:12:31.820 | 1.00th=[ 109], 5.00th=[ 113], 10.00th=[ 115], 20.00th=[ 119], 00:12:31.820 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 130], 00:12:31.820 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 149], 95.00th=[ 161], 00:12:31.820 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 202], 99.95th=[ 243], 00:12:31.820 | 99.99th=[ 269] 00:12:31.820 bw ( KiB/s): min=12288, max=12288, per=46.71%, avg=12288.00, stdev= 0.00, samples=1 00:12:31.820 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:31.820 lat (usec) : 250=99.65%, 500=0.35% 00:12:31.820 cpu : usr=3.40%, sys=13.20%, ctx=5390, majf=0, minf=11 00:12:31.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:12:31.820 issued rwts: total=2560,2830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.820 job2: (groupid=0, jobs=1): err= 0: pid=77121: Fri Jul 12 14:52:10 2024 00:12:31.820 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:31.820 slat (nsec): min=12810, max=66301, avg=21185.62, stdev=5291.26 00:12:31.820 clat (usec): min=237, max=2029, avg=542.32, stdev=115.26 00:12:31.820 lat (usec): min=277, max=2052, avg=563.51, stdev=116.34 00:12:31.820 clat percentiles (usec): 00:12:31.820 | 1.00th=[ 281], 5.00th=[ 404], 10.00th=[ 453], 20.00th=[ 474], 00:12:31.820 | 30.00th=[ 486], 40.00th=[ 502], 50.00th=[ 519], 60.00th=[ 537], 00:12:31.820 | 70.00th=[ 553], 80.00th=[ 594], 90.00th=[ 725], 95.00th=[ 750], 00:12:31.820 | 99.00th=[ 840], 99.50th=[ 906], 99.90th=[ 955], 99.95th=[ 2024], 00:12:31.820 | 99.99th=[ 2024] 00:12:31.820 write: IOPS=1107, BW=4432KiB/s (4538kB/s)(4436KiB/1001msec); 0 zone resets 00:12:31.820 slat (usec): min=16, max=102, avg=34.98, stdev= 9.09 00:12:31.820 clat (usec): min=107, max=849, avg=341.14, stdev=66.74 00:12:31.820 lat (usec): min=138, max=886, avg=376.12, stdev=68.40 00:12:31.820 clat percentiles (usec): 00:12:31.820 | 1.00th=[ 212], 5.00th=[ 227], 10.00th=[ 241], 20.00th=[ 281], 00:12:31.820 | 30.00th=[ 306], 40.00th=[ 326], 50.00th=[ 347], 60.00th=[ 367], 00:12:31.820 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 420], 95.00th=[ 429], 00:12:31.820 | 99.00th=[ 465], 99.50th=[ 474], 99.90th=[ 635], 99.95th=[ 848], 00:12:31.820 | 99.99th=[ 848] 00:12:31.820 bw ( KiB/s): min= 4096, max= 4096, per=15.57%, avg=4096.00, stdev= 0.00, samples=1 00:12:31.820 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:31.820 lat (usec) : 250=5.77%, 500=65.40%, 750=26.58%, 1000=2.20% 00:12:31.820 lat (msec) : 4=0.05% 00:12:31.820 cpu : usr=0.80%, sys=5.30%, ctx=2134, majf=0, minf=11 00:12:31.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.820 issued rwts: total=1024,1109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.820 job3: (groupid=0, jobs=1): err= 0: pid=77122: Fri Jul 12 14:52:10 2024 00:12:31.820 read: IOPS=1260, BW=5043KiB/s (5164kB/s)(5048KiB/1001msec) 00:12:31.820 slat (nsec): min=14840, max=85055, avg=27871.79, stdev=10108.18 00:12:31.820 clat (usec): min=172, max=626, avg=365.23, stdev=89.43 00:12:31.820 lat (usec): min=197, max=663, avg=393.10, stdev=96.09 00:12:31.820 clat percentiles (usec): 00:12:31.820 | 1.00th=[ 239], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 269], 00:12:31.820 | 30.00th=[ 289], 40.00th=[ 338], 50.00th=[ 355], 60.00th=[ 379], 00:12:31.820 | 70.00th=[ 424], 80.00th=[ 453], 90.00th=[ 494], 95.00th=[ 519], 00:12:31.820 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[ 619], 99.95th=[ 627], 00:12:31.820 | 99.99th=[ 627] 00:12:31.820 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:31.820 slat (usec): min=22, max=146, avg=41.30, stdev=14.04 00:12:31.820 clat (usec): min=125, max=3154, avg=280.97, stdev=110.65 00:12:31.820 lat (usec): min=160, max=3200, avg=322.27, stdev=115.57 00:12:31.820 clat percentiles (usec): 00:12:31.820 | 1.00th=[ 167], 5.00th=[ 190], 10.00th=[ 200], 20.00th=[ 210], 00:12:31.820 | 30.00th=[ 221], 40.00th=[ 229], 
50.00th=[ 245], 60.00th=[ 269], 00:12:31.820 | 70.00th=[ 338], 80.00th=[ 375], 90.00th=[ 400], 95.00th=[ 420], 00:12:31.820 | 99.00th=[ 453], 99.50th=[ 486], 99.90th=[ 906], 99.95th=[ 3163], 00:12:31.820 | 99.99th=[ 3163] 00:12:31.820 bw ( KiB/s): min= 6008, max= 6008, per=22.84%, avg=6008.00, stdev= 0.00, samples=1 00:12:31.820 iops : min= 1502, max= 1502, avg=1502.00, stdev= 0.00, samples=1 00:12:31.820 lat (usec) : 250=31.42%, 500=64.55%, 750=3.97%, 1000=0.04% 00:12:31.820 lat (msec) : 4=0.04% 00:12:31.820 cpu : usr=2.20%, sys=7.30%, ctx=2798, majf=0, minf=10 00:12:31.821 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.821 issued rwts: total=1262,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.821 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.821 00:12:31.821 Run status group 0 (all jobs): 00:12:31.821 READ: bw=22.9MiB/s (24.0MB/s), 4092KiB/s-9.99MiB/s (4190kB/s-10.5MB/s), io=22.9MiB (24.0MB), run=1001-1001msec 00:12:31.821 WRITE: bw=25.7MiB/s (26.9MB/s), 4428KiB/s-11.0MiB/s (4534kB/s-11.6MB/s), io=25.7MiB (27.0MB), run=1001-1001msec 00:12:31.821 00:12:31.821 Disk stats (read/write): 00:12:31.821 nvme0n1: ios=846/1024, merge=0/0, ticks=467/363, in_queue=830, util=87.27% 00:12:31.821 nvme0n2: ios=2085/2525, merge=0/0, ticks=409/372, in_queue=781, util=88.43% 00:12:31.821 nvme0n3: ios=796/1024, merge=0/0, ticks=441/355, in_queue=796, util=89.05% 00:12:31.821 nvme0n4: ios=1024/1338, merge=0/0, ticks=390/390, in_queue=780, util=89.61% 00:12:31.821 14:52:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:31.821 [global] 00:12:31.821 thread=1 00:12:31.821 invalidate=1 00:12:31.821 rw=write 00:12:31.821 time_based=1 00:12:31.821 runtime=1 00:12:31.821 ioengine=libaio 00:12:31.821 direct=1 00:12:31.821 bs=4096 00:12:31.821 iodepth=128 00:12:31.821 norandommap=0 00:12:31.821 numjobs=1 00:12:31.821 00:12:31.821 verify_dump=1 00:12:31.821 verify_backlog=512 00:12:31.821 verify_state_save=0 00:12:31.821 do_verify=1 00:12:31.821 verify=crc32c-intel 00:12:31.821 [job0] 00:12:31.821 filename=/dev/nvme0n1 00:12:31.821 [job1] 00:12:31.821 filename=/dev/nvme0n2 00:12:31.821 [job2] 00:12:31.821 filename=/dev/nvme0n3 00:12:31.821 [job3] 00:12:31.821 filename=/dev/nvme0n4 00:12:31.821 Could not set queue depth (nvme0n1) 00:12:31.821 Could not set queue depth (nvme0n2) 00:12:31.821 Could not set queue depth (nvme0n3) 00:12:31.821 Could not set queue depth (nvme0n4) 00:12:31.821 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.821 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.821 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.821 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.821 fio-3.35 00:12:31.821 Starting 4 threads 00:12:33.194 00:12:33.194 job0: (groupid=0, jobs=1): err= 0: pid=77176: Fri Jul 12 14:52:11 2024 00:12:33.194 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:12:33.194 slat (usec): min=5, max=6234, avg=86.10, stdev=409.36 00:12:33.194 clat (usec): min=6055, max=19743, avg=11162.00, stdev=2203.20 
00:12:33.194 lat (usec): min=6074, max=19758, avg=11248.10, stdev=2230.83 00:12:33.194 clat percentiles (usec): 00:12:33.194 | 1.00th=[ 6980], 5.00th=[ 8094], 10.00th=[ 8586], 20.00th=[ 9241], 00:12:33.194 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10945], 60.00th=[11469], 00:12:33.194 | 70.00th=[12256], 80.00th=[13304], 90.00th=[13960], 95.00th=[14746], 00:12:33.194 | 99.00th=[17171], 99.50th=[18482], 99.90th=[19792], 99.95th=[19792], 00:12:33.194 | 99.99th=[19792] 00:12:33.194 write: IOPS=5790, BW=22.6MiB/s (23.7MB/s)(22.6MiB/1001msec); 0 zone resets 00:12:33.194 slat (usec): min=9, max=5366, avg=80.65, stdev=310.12 00:12:33.194 clat (usec): min=546, max=20767, avg=10999.62, stdev=2074.64 00:12:33.194 lat (usec): min=4512, max=20789, avg=11080.27, stdev=2094.45 00:12:33.194 clat percentiles (usec): 00:12:33.194 | 1.00th=[ 5800], 5.00th=[ 8094], 10.00th=[ 8979], 20.00th=[ 9503], 00:12:33.194 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[10683], 60.00th=[11207], 00:12:33.194 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13829], 95.00th=[14484], 00:12:33.194 | 99.00th=[16909], 99.50th=[18220], 99.90th=[20841], 99.95th=[20841], 00:12:33.194 | 99.99th=[20841] 00:12:33.194 bw ( KiB/s): min=20856, max=20856, per=36.56%, avg=20856.00, stdev= 0.00, samples=1 00:12:33.194 iops : min= 5214, max= 5214, avg=5214.00, stdev= 0.00, samples=1 00:12:33.194 lat (usec) : 750=0.01% 00:12:33.194 lat (msec) : 10=33.90%, 20=66.03%, 50=0.06% 00:12:33.194 cpu : usr=5.20%, sys=16.70%, ctx=780, majf=0, minf=7 00:12:33.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:33.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:33.194 issued rwts: total=5632,5796,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:33.194 job1: (groupid=0, jobs=1): err= 0: pid=77177: Fri Jul 12 14:52:11 2024 00:12:33.194 read: IOPS=1905, BW=7623KiB/s (7806kB/s)(7684KiB/1008msec) 00:12:33.194 slat (usec): min=4, max=21181, avg=279.65, stdev=1625.94 00:12:33.194 clat (usec): min=5586, max=68273, avg=32697.56, stdev=13002.80 00:12:33.194 lat (usec): min=12031, max=68289, avg=32977.21, stdev=13021.17 00:12:33.194 clat percentiles (usec): 00:12:33.194 | 1.00th=[12387], 5.00th=[18744], 10.00th=[20055], 20.00th=[21627], 00:12:33.194 | 30.00th=[22676], 40.00th=[23200], 50.00th=[28967], 60.00th=[35390], 00:12:33.194 | 70.00th=[39584], 80.00th=[42730], 90.00th=[52167], 95.00th=[56886], 00:12:33.194 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:12:33.194 | 99.99th=[68682] 00:12:33.194 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:12:33.194 slat (usec): min=15, max=15440, avg=218.11, stdev=1190.62 00:12:33.194 clat (usec): min=12045, max=63220, avg=30689.93, stdev=11559.23 00:12:33.194 lat (usec): min=14547, max=63294, avg=30908.05, stdev=11556.07 00:12:33.194 clat percentiles (usec): 00:12:33.194 | 1.00th=[14615], 5.00th=[16712], 10.00th=[17171], 20.00th=[17695], 00:12:33.194 | 30.00th=[18220], 40.00th=[25822], 50.00th=[34866], 60.00th=[36439], 00:12:33.194 | 70.00th=[37487], 80.00th=[40109], 90.00th=[44303], 95.00th=[45876], 00:12:33.194 | 99.00th=[62653], 99.50th=[62653], 99.90th=[63177], 99.95th=[63177], 00:12:33.194 | 99.99th=[63177] 00:12:33.194 bw ( KiB/s): min= 8192, max= 8208, per=14.38%, avg=8200.00, stdev=11.31, samples=2 00:12:33.194 iops : min= 2048, max= 2052, avg=2050.00, 
stdev= 2.83, samples=2 00:12:33.194 lat (msec) : 10=0.03%, 20=22.55%, 50=68.81%, 100=8.62% 00:12:33.194 cpu : usr=1.69%, sys=6.26%, ctx=125, majf=0, minf=15 00:12:33.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:12:33.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:33.194 issued rwts: total=1921,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:33.194 job2: (groupid=0, jobs=1): err= 0: pid=77178: Fri Jul 12 14:52:11 2024 00:12:33.194 read: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec) 00:12:33.194 slat (usec): min=4, max=15049, avg=130.16, stdev=712.86 00:12:33.194 clat (usec): min=9390, max=54255, avg=16295.49, stdev=6870.54 00:12:33.194 lat (usec): min=9416, max=54301, avg=16425.65, stdev=6954.34 00:12:33.194 clat percentiles (usec): 00:12:33.194 | 1.00th=[10421], 5.00th=[10945], 10.00th=[11600], 20.00th=[12780], 00:12:33.194 | 30.00th=[13042], 40.00th=[13566], 50.00th=[14091], 60.00th=[14484], 00:12:33.194 | 70.00th=[15008], 80.00th=[16581], 90.00th=[30540], 95.00th=[31327], 00:12:33.194 | 99.00th=[44827], 99.50th=[47973], 99.90th=[50594], 99.95th=[54264], 00:12:33.194 | 99.99th=[54264] 00:12:33.194 write: IOPS=3962, BW=15.5MiB/s (16.2MB/s)(15.7MiB/1014msec); 0 zone resets 00:12:33.194 slat (usec): min=5, max=8000, avg=125.11, stdev=537.57 00:12:33.194 clat (usec): min=8975, max=61890, avg=17340.07, stdev=9652.43 00:12:33.194 lat (usec): min=9093, max=62110, avg=17465.18, stdev=9698.07 00:12:33.194 clat percentiles (usec): 00:12:33.194 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[11338], 20.00th=[12518], 00:12:33.194 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13698], 60.00th=[14222], 00:12:33.194 | 70.00th=[14746], 80.00th=[18744], 90.00th=[30016], 95.00th=[39060], 00:12:33.194 | 99.00th=[55837], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:12:33.194 | 99.99th=[62129] 00:12:33.194 bw ( KiB/s): min=10704, max=20464, per=27.32%, avg=15584.00, stdev=6901.36, samples=2 00:12:33.194 iops : min= 2676, max= 5116, avg=3896.00, stdev=1725.34, samples=2 00:12:33.194 lat (msec) : 10=1.14%, 20=82.14%, 50=15.10%, 100=1.62% 00:12:33.194 cpu : usr=3.06%, sys=11.65%, ctx=540, majf=0, minf=8 00:12:33.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:33.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:33.194 issued rwts: total=3584,4018,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:33.194 job3: (groupid=0, jobs=1): err= 0: pid=77179: Fri Jul 12 14:52:11 2024 00:12:33.194 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:12:33.194 slat (usec): min=4, max=16890, avg=189.95, stdev=978.21 00:12:33.194 clat (usec): min=12774, max=56597, avg=24836.47, stdev=9163.50 00:12:33.194 lat (usec): min=12815, max=57551, avg=25026.42, stdev=9232.57 00:12:33.194 clat percentiles (usec): 00:12:33.194 | 1.00th=[13042], 5.00th=[16057], 10.00th=[16909], 20.00th=[18220], 00:12:33.194 | 30.00th=[18744], 40.00th=[19006], 50.00th=[20579], 60.00th=[23200], 00:12:33.194 | 70.00th=[27919], 80.00th=[31851], 90.00th=[39584], 95.00th=[43779], 00:12:33.194 | 99.00th=[52691], 99.50th=[52691], 99.90th=[56361], 99.95th=[56361], 00:12:33.194 | 99.99th=[56361] 00:12:33.194 
write: IOPS=2562, BW=10.0MiB/s (10.5MB/s)(10.1MiB/1014msec); 0 zone resets 00:12:33.194 slat (usec): min=11, max=8383, avg=190.85, stdev=814.67 00:12:33.194 clat (usec): min=8604, max=53310, avg=24823.31, stdev=8551.89 00:12:33.194 lat (usec): min=12114, max=56560, avg=25014.16, stdev=8622.06 00:12:33.194 clat percentiles (usec): 00:12:33.194 | 1.00th=[12387], 5.00th=[14746], 10.00th=[15401], 20.00th=[16188], 00:12:33.195 | 30.00th=[17695], 40.00th=[19792], 50.00th=[24773], 60.00th=[28443], 00:12:33.195 | 70.00th=[29492], 80.00th=[31589], 90.00th=[35390], 95.00th=[38536], 00:12:33.195 | 99.00th=[51119], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:12:33.195 | 99.99th=[53216] 00:12:33.195 bw ( KiB/s): min= 8192, max=12312, per=17.97%, avg=10252.00, stdev=2913.28, samples=2 00:12:33.195 iops : min= 2048, max= 3078, avg=2563.00, stdev=728.32, samples=2 00:12:33.195 lat (msec) : 10=0.02%, 20=44.55%, 50=53.59%, 100=1.84% 00:12:33.195 cpu : usr=2.17%, sys=8.88%, ctx=411, majf=0, minf=9 00:12:33.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:33.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:33.195 issued rwts: total=2560,2598,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:33.195 00:12:33.195 Run status group 0 (all jobs): 00:12:33.195 READ: bw=52.8MiB/s (55.3MB/s), 7623KiB/s-22.0MiB/s (7806kB/s-23.0MB/s), io=53.5MiB (56.1MB), run=1001-1014msec 00:12:33.195 WRITE: bw=55.7MiB/s (58.4MB/s), 8127KiB/s-22.6MiB/s (8322kB/s-23.7MB/s), io=56.5MiB (59.2MB), run=1001-1014msec 00:12:33.195 00:12:33.195 Disk stats (read/write): 00:12:33.195 nvme0n1: ios=4657/4855, merge=0/0, ticks=25173/23777, in_queue=48950, util=87.25% 00:12:33.195 nvme0n2: ios=1558/1920, merge=0/0, ticks=12639/12529, in_queue=25168, util=87.84% 00:12:33.195 nvme0n3: ios=3348/3584, merge=0/0, ticks=16428/16007, in_queue=32435, util=89.04% 00:12:33.195 nvme0n4: ios=2048/2509, merge=0/0, ticks=20533/26366, in_queue=46899, util=89.60% 00:12:33.195 14:52:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:33.195 [global] 00:12:33.195 thread=1 00:12:33.195 invalidate=1 00:12:33.195 rw=randwrite 00:12:33.195 time_based=1 00:12:33.195 runtime=1 00:12:33.195 ioengine=libaio 00:12:33.195 direct=1 00:12:33.195 bs=4096 00:12:33.195 iodepth=128 00:12:33.195 norandommap=0 00:12:33.195 numjobs=1 00:12:33.195 00:12:33.195 verify_dump=1 00:12:33.195 verify_backlog=512 00:12:33.195 verify_state_save=0 00:12:33.195 do_verify=1 00:12:33.195 verify=crc32c-intel 00:12:33.195 [job0] 00:12:33.195 filename=/dev/nvme0n1 00:12:33.195 [job1] 00:12:33.195 filename=/dev/nvme0n2 00:12:33.195 [job2] 00:12:33.195 filename=/dev/nvme0n3 00:12:33.195 [job3] 00:12:33.195 filename=/dev/nvme0n4 00:12:33.195 Could not set queue depth (nvme0n1) 00:12:33.195 Could not set queue depth (nvme0n2) 00:12:33.195 Could not set queue depth (nvme0n3) 00:12:33.195 Could not set queue depth (nvme0n4) 00:12:33.195 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:33.195 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:33.195 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:12:33.195 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:33.195 fio-3.35 00:12:33.195 Starting 4 threads 00:12:34.566 00:12:34.566 job0: (groupid=0, jobs=1): err= 0: pid=77238: Fri Jul 12 14:52:12 2024 00:12:34.566 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec) 00:12:34.566 slat (usec): min=4, max=8836, avg=76.14, stdev=477.26 00:12:34.566 clat (usec): min=4247, max=18415, avg=10149.51, stdev=2272.01 00:12:34.566 lat (usec): min=4261, max=18457, avg=10225.65, stdev=2297.98 00:12:34.566 clat percentiles (usec): 00:12:34.566 | 1.00th=[ 5276], 5.00th=[ 7504], 10.00th=[ 7898], 20.00th=[ 8717], 00:12:34.566 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9896], 00:12:34.566 | 70.00th=[10552], 80.00th=[11338], 90.00th=[13042], 95.00th=[15270], 00:12:34.566 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18482], 99.95th=[18482], 00:12:34.566 | 99.99th=[18482] 00:12:34.566 write: IOPS=6659, BW=26.0MiB/s (27.3MB/s)(26.1MiB/1005msec); 0 zone resets 00:12:34.566 slat (usec): min=4, max=7478, avg=66.27, stdev=370.93 00:12:34.566 clat (usec): min=3045, max=18342, avg=8924.50, stdev=1700.69 00:12:34.566 lat (usec): min=3517, max=18417, avg=8990.77, stdev=1739.92 00:12:34.566 clat percentiles (usec): 00:12:34.566 | 1.00th=[ 4047], 5.00th=[ 5145], 10.00th=[ 6456], 20.00th=[ 7898], 00:12:34.566 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:12:34.566 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10421], 95.00th=[10552], 00:12:34.566 | 99.00th=[11600], 99.50th=[12649], 99.90th=[17957], 99.95th=[18220], 00:12:34.566 | 99.99th=[18220] 00:12:34.566 bw ( KiB/s): min=24848, max=28400, per=52.78%, avg=26624.00, stdev=2511.64, samples=2 00:12:34.566 iops : min= 6212, max= 7100, avg=6656.00, stdev=627.91, samples=2 00:12:34.566 lat (msec) : 4=0.43%, 10=64.81%, 20=34.77% 00:12:34.566 cpu : usr=6.77%, sys=15.04%, ctx=844, majf=0, minf=13 00:12:34.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:34.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:34.566 issued rwts: total=6656,6693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.566 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:34.566 job1: (groupid=0, jobs=1): err= 0: pid=77239: Fri Jul 12 14:52:12 2024 00:12:34.566 read: IOPS=1394, BW=5576KiB/s (5710kB/s)(5604KiB/1005msec) 00:12:34.566 slat (usec): min=3, max=12960, avg=257.32, stdev=1278.08 00:12:34.566 clat (usec): min=3096, max=95620, avg=32096.18, stdev=15798.18 00:12:34.566 lat (usec): min=6775, max=95633, avg=32353.50, stdev=15892.36 00:12:34.566 clat percentiles (usec): 00:12:34.566 | 1.00th=[ 7046], 5.00th=[18482], 10.00th=[20317], 20.00th=[22414], 00:12:34.566 | 30.00th=[22938], 40.00th=[23200], 50.00th=[26608], 60.00th=[32637], 00:12:34.566 | 70.00th=[35914], 80.00th=[41157], 90.00th=[45351], 95.00th=[63701], 00:12:34.566 | 99.00th=[92799], 99.50th=[93848], 99.90th=[95945], 99.95th=[95945], 00:12:34.566 | 99.99th=[95945] 00:12:34.566 write: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec); 0 zone resets 00:12:34.566 slat (usec): min=5, max=24641, avg=408.88, stdev=1850.19 00:12:34.566 clat (usec): min=25400, max=91070, avg=52313.02, stdev=16560.96 00:12:34.566 lat (usec): min=25424, max=91097, avg=52721.90, stdev=16679.13 00:12:34.566 clat percentiles (usec): 00:12:34.566 | 1.00th=[27919], 5.00th=[32113], 
10.00th=[34866], 20.00th=[39060], 00:12:34.566 | 30.00th=[43779], 40.00th=[45351], 50.00th=[47973], 60.00th=[48497], 00:12:34.566 | 70.00th=[52691], 80.00th=[70779], 90.00th=[83362], 95.00th=[85459], 00:12:34.566 | 99.00th=[87557], 99.50th=[87557], 99.90th=[90702], 99.95th=[90702], 00:12:34.566 | 99.99th=[90702] 00:12:34.566 bw ( KiB/s): min= 5248, max= 7040, per=12.18%, avg=6144.00, stdev=1267.14, samples=2 00:12:34.566 iops : min= 1312, max= 1760, avg=1536.00, stdev=316.78, samples=2 00:12:34.566 lat (msec) : 4=0.03%, 10=1.26%, 20=3.47%, 50=73.58%, 100=21.65% 00:12:34.566 cpu : usr=1.79%, sys=3.88%, ctx=438, majf=0, minf=17 00:12:34.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:12:34.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:34.566 issued rwts: total=1401,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.566 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:34.566 job2: (groupid=0, jobs=1): err= 0: pid=77240: Fri Jul 12 14:52:12 2024 00:12:34.566 read: IOPS=2520, BW=9.85MiB/s (10.3MB/s)(10.0MiB/1018msec) 00:12:34.566 slat (usec): min=4, max=18655, avg=159.28, stdev=1103.03 00:12:34.566 clat (usec): min=5289, max=57745, avg=19116.72, stdev=9507.17 00:12:34.566 lat (usec): min=5302, max=57763, avg=19276.00, stdev=9596.56 00:12:34.566 clat percentiles (usec): 00:12:34.566 | 1.00th=[ 6783], 5.00th=[ 9896], 10.00th=[11469], 20.00th=[11994], 00:12:34.566 | 30.00th=[12649], 40.00th=[13304], 50.00th=[15926], 60.00th=[17957], 00:12:34.566 | 70.00th=[21103], 80.00th=[24773], 90.00th=[33817], 95.00th=[39060], 00:12:34.566 | 99.00th=[53740], 99.50th=[56361], 99.90th=[57934], 99.95th=[57934], 00:12:34.566 | 99.99th=[57934] 00:12:34.566 write: IOPS=3017, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1018msec); 0 zone resets 00:12:34.566 slat (usec): min=4, max=27867, avg=185.39, stdev=1056.05 00:12:34.566 clat (usec): min=3597, max=89084, avg=25990.65, stdev=15984.81 00:12:34.566 lat (usec): min=3652, max=89096, avg=26176.04, stdev=16085.80 00:12:34.566 clat percentiles (usec): 00:12:34.567 | 1.00th=[ 5800], 5.00th=[10159], 10.00th=[12125], 20.00th=[14615], 00:12:34.567 | 30.00th=[21103], 40.00th=[22414], 50.00th=[22676], 60.00th=[23200], 00:12:34.567 | 70.00th=[24511], 80.00th=[27919], 90.00th=[46400], 95.00th=[67634], 00:12:34.567 | 99.00th=[84411], 99.50th=[86508], 99.90th=[88605], 99.95th=[88605], 00:12:34.567 | 99.99th=[88605] 00:12:34.567 bw ( KiB/s): min=11440, max=12168, per=23.40%, avg=11804.00, stdev=514.77, samples=2 00:12:34.567 iops : min= 2860, max= 3042, avg=2951.00, stdev=128.69, samples=2 00:12:34.567 lat (msec) : 4=0.12%, 10=5.16%, 20=37.14%, 50=52.06%, 100=5.52% 00:12:34.567 cpu : usr=3.24%, sys=6.78%, ctx=349, majf=0, minf=9 00:12:34.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:34.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:34.567 issued rwts: total=2566,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:34.567 job3: (groupid=0, jobs=1): err= 0: pid=77241: Fri Jul 12 14:52:12 2024 00:12:34.567 read: IOPS=1428, BW=5716KiB/s (5853kB/s)(5756KiB/1007msec) 00:12:34.567 slat (usec): min=3, max=30857, avg=274.45, stdev=1481.70 00:12:34.567 clat (usec): min=5406, max=71784, avg=33050.83, stdev=13129.90 
00:12:34.567 lat (usec): min=10999, max=71798, avg=33325.27, stdev=13201.65 00:12:34.567 clat percentiles (usec): 00:12:34.567 | 1.00th=[11207], 5.00th=[18220], 10.00th=[22152], 20.00th=[22676], 00:12:34.567 | 30.00th=[23200], 40.00th=[23725], 50.00th=[29492], 60.00th=[34866], 00:12:34.567 | 70.00th=[39060], 80.00th=[41157], 90.00th=[50594], 95.00th=[57934], 00:12:34.567 | 99.00th=[71828], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:12:34.567 | 99.99th=[71828] 00:12:34.567 write: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec); 0 zone resets 00:12:34.567 slat (usec): min=6, max=23602, avg=385.94, stdev=1867.44 00:12:34.567 clat (usec): min=12273, max=90228, avg=52015.20, stdev=19399.58 00:12:34.567 lat (usec): min=12313, max=90257, avg=52401.14, stdev=19548.41 00:12:34.567 clat percentiles (usec): 00:12:34.567 | 1.00th=[12518], 5.00th=[18744], 10.00th=[32900], 20.00th=[38011], 00:12:34.567 | 30.00th=[43779], 40.00th=[45351], 50.00th=[47449], 60.00th=[48497], 00:12:34.567 | 70.00th=[53740], 80.00th=[74974], 90.00th=[83362], 95.00th=[86508], 00:12:34.567 | 99.00th=[87557], 99.50th=[88605], 99.90th=[89654], 99.95th=[90702], 00:12:34.567 | 99.99th=[90702] 00:12:34.567 bw ( KiB/s): min= 4624, max= 7664, per=12.18%, avg=6144.00, stdev=2149.60, samples=2 00:12:34.567 iops : min= 1156, max= 1916, avg=1536.00, stdev=537.40, samples=2 00:12:34.567 lat (msec) : 10=0.03%, 20=5.95%, 50=72.30%, 100=21.71% 00:12:34.567 cpu : usr=1.49%, sys=4.08%, ctx=435, majf=0, minf=5 00:12:34.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:12:34.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:34.567 issued rwts: total=1439,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:34.567 00:12:34.567 Run status group 0 (all jobs): 00:12:34.567 READ: bw=46.3MiB/s (48.5MB/s), 5576KiB/s-25.9MiB/s (5710kB/s-27.1MB/s), io=47.1MiB (49.4MB), run=1005-1018msec 00:12:34.567 WRITE: bw=49.3MiB/s (51.7MB/s), 6101KiB/s-26.0MiB/s (6248kB/s-27.3MB/s), io=50.1MiB (52.6MB), run=1005-1018msec 00:12:34.567 00:12:34.567 Disk stats (read/write): 00:12:34.567 nvme0n1: ios=5682/5751, merge=0/0, ticks=52376/48324, in_queue=100700, util=87.46% 00:12:34.567 nvme0n2: ios=1072/1320, merge=0/0, ticks=16915/33496, in_queue=50411, util=88.25% 00:12:34.567 nvme0n3: ios=2299/2560, merge=0/0, ticks=43715/60629, in_queue=104344, util=89.21% 00:12:34.567 nvme0n4: ios=1024/1407, merge=0/0, ticks=17889/33528, in_queue=51417, util=89.46% 00:12:34.567 14:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:34.567 14:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77254 00:12:34.567 14:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:34.567 14:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:34.567 [global] 00:12:34.567 thread=1 00:12:34.567 invalidate=1 00:12:34.567 rw=read 00:12:34.567 time_based=1 00:12:34.567 runtime=10 00:12:34.567 ioengine=libaio 00:12:34.567 direct=1 00:12:34.567 bs=4096 00:12:34.567 iodepth=1 00:12:34.567 norandommap=1 00:12:34.567 numjobs=1 00:12:34.567 00:12:34.567 [job0] 00:12:34.567 filename=/dev/nvme0n1 00:12:34.567 [job1] 00:12:34.567 filename=/dev/nvme0n2 00:12:34.567 [job2] 00:12:34.567 filename=/dev/nvme0n3 00:12:34.567 [job3] 00:12:34.567 
filename=/dev/nvme0n4 00:12:34.567 Could not set queue depth (nvme0n1) 00:12:34.567 Could not set queue depth (nvme0n2) 00:12:34.567 Could not set queue depth (nvme0n3) 00:12:34.567 Could not set queue depth (nvme0n4) 00:12:34.567 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:34.567 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:34.567 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:34.567 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:34.567 fio-3.35 00:12:34.567 Starting 4 threads 00:12:37.843 14:52:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:37.843 fio: pid=77297, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:37.843 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=56598528, buflen=4096 00:12:37.843 14:52:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:38.101 fio: pid=77296, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:38.101 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=60252160, buflen=4096 00:12:38.101 14:52:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:38.101 14:52:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:38.665 fio: pid=77294, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:38.665 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=12095488, buflen=4096 00:12:38.665 14:52:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:38.665 14:52:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:38.922 fio: pid=77295, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:38.922 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=11022336, buflen=4096 00:12:38.922 00:12:38.922 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77294: Fri Jul 12 14:52:17 2024 00:12:38.922 read: IOPS=5115, BW=20.0MiB/s (21.0MB/s)(75.5MiB/3780msec) 00:12:38.922 slat (usec): min=13, max=13592, avg=20.86, stdev=170.97 00:12:38.922 clat (usec): min=3, max=4060, avg=172.91, stdev=42.16 00:12:38.922 lat (usec): min=152, max=13817, avg=193.78, stdev=177.67 00:12:38.922 clat percentiles (usec): 00:12:38.922 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:12:38.922 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:12:38.922 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 198], 95.00th=[ 225], 00:12:38.922 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 404], 99.95th=[ 594], 00:12:38.922 | 99.99th=[ 2212] 00:12:38.923 bw ( KiB/s): min=18251, max=21920, per=31.51%, avg=20506.71, stdev=1265.13, samples=7 00:12:38.923 iops : min= 4562, max= 5480, avg=5126.57, stdev=316.51, samples=7 00:12:38.923 lat (usec) : 4=0.01%, 10=0.01%, 100=0.01%, 250=98.79%, 500=1.13% 00:12:38.923 lat (usec) : 750=0.03%, 1000=0.02% 00:12:38.923 lat (msec) : 2=0.01%, 4=0.01%, 
10=0.01% 00:12:38.923 cpu : usr=1.80%, sys=7.46%, ctx=19351, majf=0, minf=1 00:12:38.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.923 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.923 issued rwts: total=19338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.923 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77295: Fri Jul 12 14:52:17 2024 00:12:38.923 read: IOPS=4635, BW=18.1MiB/s (19.0MB/s)(74.5MiB/4115msec) 00:12:38.923 slat (usec): min=13, max=12819, avg=21.90, stdev=197.29 00:12:38.923 clat (usec): min=112, max=3507, avg=192.21, stdev=51.13 00:12:38.923 lat (usec): min=146, max=13002, avg=214.12, stdev=205.29 00:12:38.923 clat percentiles (usec): 00:12:38.923 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:12:38.923 | 30.00th=[ 167], 40.00th=[ 182], 50.00th=[ 198], 60.00th=[ 204], 00:12:38.923 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 235], 00:12:38.923 | 99.00th=[ 262], 99.50th=[ 285], 99.90th=[ 502], 99.95th=[ 725], 00:12:38.923 | 99.99th=[ 2278] 00:12:38.923 bw ( KiB/s): min=17200, max=22080, per=28.65%, avg=18641.86, stdev=1930.18, samples=7 00:12:38.923 iops : min= 4300, max= 5520, avg=4660.43, stdev=482.55, samples=7 00:12:38.923 lat (usec) : 250=98.36%, 500=1.53%, 750=0.06%, 1000=0.01% 00:12:38.923 lat (msec) : 2=0.01%, 4=0.03% 00:12:38.923 cpu : usr=1.53%, sys=6.85%, ctx=19113, majf=0, minf=1 00:12:38.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.923 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.923 issued rwts: total=19076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.923 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77296: Fri Jul 12 14:52:17 2024 00:12:38.923 read: IOPS=4408, BW=17.2MiB/s (18.1MB/s)(57.5MiB/3337msec) 00:12:38.923 slat (usec): min=12, max=9768, avg=20.58, stdev=102.38 00:12:38.923 clat (usec): min=155, max=3591, avg=204.29, stdev=52.19 00:12:38.923 lat (usec): min=168, max=10024, avg=224.86, stdev=115.91 00:12:38.923 clat percentiles (usec): 00:12:38.923 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 180], 00:12:38.923 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:12:38.923 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 243], 95.00th=[ 269], 00:12:38.923 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 545], 99.95th=[ 1057], 00:12:38.923 | 99.99th=[ 2114] 00:12:38.923 bw ( KiB/s): min=15024, max=19872, per=27.37%, avg=17812.00, stdev=1720.62, samples=6 00:12:38.923 iops : min= 3756, max= 4968, avg=4453.00, stdev=430.15, samples=6 00:12:38.923 lat (usec) : 250=91.96%, 500=7.93%, 750=0.03%, 1000=0.02% 00:12:38.923 lat (msec) : 2=0.05%, 4=0.01% 00:12:38.923 cpu : usr=2.34%, sys=6.80%, ctx=14714, majf=0, minf=1 00:12:38.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.923 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.923 issued rwts: total=14711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:12:38.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.923 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77297: Fri Jul 12 14:52:17 2024 00:12:38.923 read: IOPS=4560, BW=17.8MiB/s (18.7MB/s)(54.0MiB/3030msec) 00:12:38.923 slat (nsec): min=13971, max=91890, avg=24672.30, stdev=7058.42 00:12:38.923 clat (usec): min=155, max=3066, avg=192.19, stdev=49.60 00:12:38.923 lat (usec): min=170, max=3102, avg=216.86, stdev=50.45 00:12:38.923 clat percentiles (usec): 00:12:38.923 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:12:38.923 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:12:38.923 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 219], 95.00th=[ 239], 00:12:38.923 | 99.00th=[ 289], 99.50th=[ 343], 99.90th=[ 586], 99.95th=[ 758], 00:12:38.923 | 99.99th=[ 2900] 00:12:38.923 bw ( KiB/s): min=17200, max=19752, per=28.07%, avg=18264.00, stdev=912.22, samples=6 00:12:38.923 iops : min= 4300, max= 4938, avg=4566.00, stdev=228.06, samples=6 00:12:38.923 lat (usec) : 250=96.20%, 500=3.57%, 750=0.17%, 1000=0.02% 00:12:38.923 lat (msec) : 2=0.01%, 4=0.02% 00:12:38.923 cpu : usr=2.05%, sys=9.48%, ctx=13820, majf=0, minf=1 00:12:38.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.923 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.923 issued rwts: total=13819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.923 00:12:38.923 Run status group 0 (all jobs): 00:12:38.923 READ: bw=63.5MiB/s (66.6MB/s), 17.2MiB/s-20.0MiB/s (18.1MB/s-21.0MB/s), io=261MiB (274MB), run=3030-4115msec 00:12:38.923 00:12:38.923 Disk stats (read/write): 00:12:38.923 nvme0n1: ios=18491/0, merge=0/0, ticks=3269/0, in_queue=3269, util=95.59% 00:12:38.923 nvme0n2: ios=18059/0, merge=0/0, ticks=3502/0, in_queue=3502, util=95.46% 00:12:38.923 nvme0n3: ios=13793/0, merge=0/0, ticks=2858/0, in_queue=2858, util=96.37% 00:12:38.923 nvme0n4: ios=13065/0, merge=0/0, ticks=2593/0, in_queue=2593, util=96.70% 00:12:38.923 14:52:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:38.923 14:52:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:39.181 14:52:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:39.181 14:52:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:39.439 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:39.439 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:40.004 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.004 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:40.261 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.261 14:52:18 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:40.520 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:40.520 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 77254 00:12:40.520 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:40.520 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.520 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.520 14:52:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:40.520 14:52:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:40.520 14:52:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.520 14:52:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:40.520 14:52:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.520 nvmf hotplug test: fio failed as expected 00:12:40.520 14:52:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:40.520 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:40.520 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:40.520 14:52:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:40.777 rmmod nvme_tcp 00:12:40.777 rmmod nvme_fabrics 00:12:40.777 rmmod nvme_keyring 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 76755 ']' 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 76755 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 76755 ']' 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 76755 00:12:40.777 14:52:19 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:12:40.777 14:52:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:40.778 14:52:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76755 00:12:40.778 killing process with pid 76755 00:12:40.778 14:52:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:40.778 14:52:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:40.778 14:52:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76755' 00:12:40.778 14:52:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 76755 00:12:40.778 14:52:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 76755 00:12:41.035 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:41.035 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:41.035 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:41.035 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.035 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:41.035 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.035 14:52:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.035 14:52:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.035 14:52:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:41.035 00:12:41.035 real 0m20.730s 00:12:41.035 user 1m19.522s 00:12:41.035 sys 0m9.975s 00:12:41.035 ************************************ 00:12:41.035 END TEST nvmf_fio_target 00:12:41.036 ************************************ 00:12:41.036 14:52:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:41.036 14:52:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.036 14:52:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:41.036 14:52:19 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:41.036 14:52:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:41.036 14:52:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.036 14:52:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:41.036 ************************************ 00:12:41.036 START TEST nvmf_bdevio 00:12:41.036 ************************************ 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:41.036 * Looking for test storage... 
00:12:41.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.036 14:52:19 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:41.036 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:41.294 Cannot find device "nvmf_tgt_br" 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:41.294 Cannot find device "nvmf_tgt_br2" 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:41.294 Cannot find device "nvmf_tgt_br" 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:41.294 Cannot find device "nvmf_tgt_br2" 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:41.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:41.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:41.294 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:41.552 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:41.552 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:41.552 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:41.552 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:41.552 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:41.552 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:41.552 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:41.552 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:41.553 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:41.553 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:41.553 14:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:41.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:12:41.553 00:12:41.553 --- 10.0.0.2 ping statistics --- 00:12:41.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.553 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:41.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:41.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:12:41.553 00:12:41.553 --- 10.0.0.3 ping statistics --- 00:12:41.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.553 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:41.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:41.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:12:41.553 00:12:41.553 --- 10.0.0.1 ping statistics --- 00:12:41.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.553 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=77626 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 77626 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 77626 ']' 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.553 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:41.553 [2024-07-12 14:52:20.119225] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:12:41.553 [2024-07-12 14:52:20.119316] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.811 [2024-07-12 14:52:20.259876] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.811 [2024-07-12 14:52:20.334177] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.811 [2024-07-12 14:52:20.334241] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:41.811 [2024-07-12 14:52:20.334255] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.811 [2024-07-12 14:52:20.334265] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.811 [2024-07-12 14:52:20.334274] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.811 [2024-07-12 14:52:20.334460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:41.811 [2024-07-12 14:52:20.334552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:41.811 [2024-07-12 14:52:20.334593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:41.811 [2024-07-12 14:52:20.334596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.811 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:41.811 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:12:41.811 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:41.811 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:41.811 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:42.069 [2024-07-12 14:52:20.483294] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:42.069 Malloc0 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:12:42.069 [2024-07-12 14:52:20.549775] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:42.069 { 00:12:42.069 "params": { 00:12:42.069 "name": "Nvme$subsystem", 00:12:42.069 "trtype": "$TEST_TRANSPORT", 00:12:42.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:42.069 "adrfam": "ipv4", 00:12:42.069 "trsvcid": "$NVMF_PORT", 00:12:42.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:42.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:42.069 "hdgst": ${hdgst:-false}, 00:12:42.069 "ddgst": ${ddgst:-false} 00:12:42.069 }, 00:12:42.069 "method": "bdev_nvme_attach_controller" 00:12:42.069 } 00:12:42.069 EOF 00:12:42.069 )") 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:12:42.069 14:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:42.069 "params": { 00:12:42.069 "name": "Nvme1", 00:12:42.069 "trtype": "tcp", 00:12:42.069 "traddr": "10.0.0.2", 00:12:42.069 "adrfam": "ipv4", 00:12:42.069 "trsvcid": "4420", 00:12:42.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:42.069 "hdgst": false, 00:12:42.069 "ddgst": false 00:12:42.069 }, 00:12:42.069 "method": "bdev_nvme_attach_controller" 00:12:42.069 }' 00:12:42.069 [2024-07-12 14:52:20.613776] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:12:42.069 [2024-07-12 14:52:20.613886] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77666 ] 00:12:42.327 [2024-07-12 14:52:20.757511] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:42.327 [2024-07-12 14:52:20.836469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.327 [2024-07-12 14:52:20.836573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.327 [2024-07-12 14:52:20.836578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.585 I/O targets: 00:12:42.585 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:42.585 00:12:42.585 00:12:42.585 CUnit - A unit testing framework for C - Version 2.1-3 00:12:42.585 http://cunit.sourceforge.net/ 00:12:42.585 00:12:42.585 00:12:42.585 Suite: bdevio tests on: Nvme1n1 00:12:42.585 Test: blockdev write read block ...passed 00:12:42.585 Test: blockdev write zeroes read block ...passed 00:12:42.585 Test: blockdev write zeroes read no split ...passed 00:12:42.585 Test: blockdev write zeroes read split ...passed 00:12:42.585 Test: blockdev write zeroes read split partial ...passed 00:12:42.585 Test: blockdev reset ...[2024-07-12 14:52:21.107671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:42.585 [2024-07-12 14:52:21.108537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95b320 (9): Bad file descriptor 00:12:42.585 [2024-07-12 14:52:21.122326] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:42.585 passed 00:12:42.585 Test: blockdev write read 8 blocks ...passed 00:12:42.585 Test: blockdev write read size > 128k ...passed 00:12:42.585 Test: blockdev write read invalid size ...passed 00:12:42.585 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:42.585 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:42.585 Test: blockdev write read max offset ...passed 00:12:42.843 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:42.843 Test: blockdev writev readv 8 blocks ...passed 00:12:42.843 Test: blockdev writev readv 30 x 1block ...passed 00:12:42.843 Test: blockdev writev readv block ...passed 00:12:42.843 Test: blockdev writev readv size > 128k ...passed 00:12:42.843 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:42.843 Test: blockdev comparev and writev ...[2024-07-12 14:52:21.296024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:42.843 [2024-07-12 14:52:21.296110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:42.843 [2024-07-12 14:52:21.296148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:42.843 [2024-07-12 14:52:21.296169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:42.843 [2024-07-12 14:52:21.296698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:42.843 [2024-07-12 14:52:21.296749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:42.843 [2024-07-12 14:52:21.296782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:42.843 [2024-07-12 14:52:21.296802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:42.843 [2024-07-12 14:52:21.297328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:42.843 [2024-07-12 14:52:21.297378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:42.843 [2024-07-12 14:52:21.297412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:42.843 [2024-07-12 14:52:21.297433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:42.843 [2024-07-12 14:52:21.297949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:42.843 [2024-07-12 14:52:21.297998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:42.843 [2024-07-12 14:52:21.298030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:42.843 [2024-07-12 14:52:21.298052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:42.843 passed 00:12:42.843 Test: blockdev nvme passthru rw ...passed 00:12:42.843 Test: blockdev nvme passthru vendor specific ...[2024-07-12 14:52:21.381112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:42.843 [2024-07-12 14:52:21.381196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:42.843 [2024-07-12 14:52:21.381414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:42.843 [2024-07-12 14:52:21.381457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:42.843 [2024-07-12 14:52:21.381673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:42.843 [2024-07-12 14:52:21.381716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:42.843 [2024-07-12 14:52:21.381912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:42.843 [2024-07-12 14:52:21.381955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:42.843 passed 00:12:42.843 Test: blockdev nvme admin passthru ...passed 00:12:42.843 Test: blockdev copy ...passed 00:12:42.843 00:12:42.843 Run Summary: Type Total Ran Passed Failed Inactive 00:12:42.843 suites 1 1 n/a 0 0 00:12:42.843 tests 23 23 23 0 0 00:12:42.843 asserts 152 152 152 0 n/a 00:12:42.843 00:12:42.843 Elapsed time = 0.892 seconds 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:43.103 rmmod nvme_tcp 00:12:43.103 rmmod nvme_fabrics 00:12:43.103 rmmod nvme_keyring 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 77626 ']' 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 77626 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
77626 ']' 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 77626 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:43.103 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77626 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:12:43.367 killing process with pid 77626 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77626' 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 77626 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 77626 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:43.367 ************************************ 00:12:43.367 END TEST nvmf_bdevio 00:12:43.367 ************************************ 00:12:43.367 00:12:43.367 real 0m2.401s 00:12:43.367 user 0m8.175s 00:12:43.367 sys 0m0.637s 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:43.367 14:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:43.624 14:52:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:43.624 14:52:22 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:43.624 14:52:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:43.624 14:52:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:43.624 14:52:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:43.624 ************************************ 00:12:43.624 START TEST nvmf_auth_target 00:12:43.624 ************************************ 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:43.624 * Looking for test storage... 
00:12:43.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.624 14:52:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:43.625 Cannot find device "nvmf_tgt_br" 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:43.625 Cannot find device "nvmf_tgt_br2" 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:43.625 Cannot find device "nvmf_tgt_br" 00:12:43.625 
14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:43.625 Cannot find device "nvmf_tgt_br2" 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:43.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:43.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:43.625 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:43.881 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:43.882 14:52:22 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:43.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:12:43.882 00:12:43.882 --- 10.0.0.2 ping statistics --- 00:12:43.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.882 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:43.882 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:43.882 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:12:43.882 00:12:43.882 --- 10.0.0.3 ping statistics --- 00:12:43.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.882 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:43.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:12:43.882 00:12:43.882 --- 10.0.0.1 ping statistics --- 00:12:43.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.882 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:43.882 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:44.140 14:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:12:44.140 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:44.140 14:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:44.140 14:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.140 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=77848 00:12:44.140 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:44.140 14:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 77848 00:12:44.140 14:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77848 ']' 00:12:44.140 14:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.140 14:52:22 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:44.140 14:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.140 14:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:44.140 14:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.092 14:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:45.092 14:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:45.092 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:45.092 14:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:45.092 14:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=77893 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b7a4a18a87035a732355b57f7f41f8975b98f83630e6beee 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gXX 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b7a4a18a87035a732355b57f7f41f8975b98f83630e6beee 0 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b7a4a18a87035a732355b57f7f41f8975b98f83630e6beee 0 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b7a4a18a87035a732355b57f7f41f8975b98f83630e6beee 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gXX 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gXX 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.gXX 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=94284305d3af2bdd900fb8bf50043b0bbc236fa22a1d13c31dbc7bec4dff6932 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.rXY 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 94284305d3af2bdd900fb8bf50043b0bbc236fa22a1d13c31dbc7bec4dff6932 3 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 94284305d3af2bdd900fb8bf50043b0bbc236fa22a1d13c31dbc7bec4dff6932 3 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=94284305d3af2bdd900fb8bf50043b0bbc236fa22a1d13c31dbc7bec4dff6932 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.rXY 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.rXY 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.rXY 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ede408eecd55ca41bafb266954d1e551 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.UR1 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ede408eecd55ca41bafb266954d1e551 1 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ede408eecd55ca41bafb266954d1e551 1 
00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ede408eecd55ca41bafb266954d1e551 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.UR1 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.UR1 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.UR1 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d3f9354c0ee159b4da7a2ee2001130d504df85d1f2b6208e 00:12:45.351 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:12:45.352 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.eCO 00:12:45.352 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d3f9354c0ee159b4da7a2ee2001130d504df85d1f2b6208e 2 00:12:45.352 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d3f9354c0ee159b4da7a2ee2001130d504df85d1f2b6208e 2 00:12:45.352 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:45.352 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:45.352 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d3f9354c0ee159b4da7a2ee2001130d504df85d1f2b6208e 00:12:45.352 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:12:45.352 14:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.eCO 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.eCO 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.eCO 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:45.608 
14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6db6b30e7c99f9d1c5a708489bf398226b425a1c4bd86c30 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.uw5 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6db6b30e7c99f9d1c5a708489bf398226b425a1c4bd86c30 2 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6db6b30e7c99f9d1c5a708489bf398226b425a1c4bd86c30 2 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6db6b30e7c99f9d1c5a708489bf398226b425a1c4bd86c30 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.uw5 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.uw5 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.uw5 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5b383f81a030193a3fe981a8ec4d0fd6 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.UGi 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5b383f81a030193a3fe981a8ec4d0fd6 1 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5b383f81a030193a3fe981a8ec4d0fd6 1 00:12:45.608 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5b383f81a030193a3fe981a8ec4d0fd6 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.UGi 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.UGi 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.UGi 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f3d6c43feff8e71c2408c389f75cc43889cda7ce450f20c2790e5b472a33d86c 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.oOG 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f3d6c43feff8e71c2408c389f75cc43889cda7ce450f20c2790e5b472a33d86c 3 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f3d6c43feff8e71c2408c389f75cc43889cda7ce450f20c2790e5b472a33d86c 3 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f3d6c43feff8e71c2408c389f75cc43889cda7ce450f20c2790e5b472a33d86c 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.oOG 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.oOG 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.oOG 00:12:45.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 77848 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77848 ']' 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:45.609 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
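[Annotation] The trace up to this point is the key-generation phase of target/auth.sh: gen_dhchap_key draws len/2 random bytes with xxd from /dev/urandom, writes them to a mktemp file with mode 0600, and format_dhchap_key/format_key wrap the hex string into the DH-HMAC-CHAP secret representation DHHC-1:<digest index>:<base64(key + CRC-32)>: using the digest map shown above (null=0, sha256=1, sha384=2, sha512=3). A minimal standalone sketch of that flow follows; the embedded Python and the little-endian CRC byte order are assumptions inferred from the secret strings that appear later in the log, not a copy of nvmf/common.sh.

    # Sketch: build one DH-HMAC-CHAP secret file the way the trace does (assumptions noted above).
    digest=sha384 len=48
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # hex string, $len characters
    file=$(mktemp -t "spdk.key-${digest}.XXX")

    # DHHC-1:<digest index>:<base64(ASCII hex key + CRC-32)>:  -- CRC byte order assumed little-endian
    python3 - "$key" "${digests[$digest]}" > "$file" << 'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")
    print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:", end="")
    EOF

    chmod 0600 "$file"
    echo "$file"                                       # e.g. /tmp/spdk.key-sha384.eCO in the run above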
00:12:46.174 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.174 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:46.174 14:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 77893 /var/tmp/host.sock 00:12:46.174 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77893 ']' 00:12:46.174 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:46.174 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.174 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:46.174 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.174 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.431 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.431 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:46.431 14:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:12:46.431 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.431 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.431 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.431 14:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:46.431 14:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gXX 00:12:46.431 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.431 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.431 14:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.431 14:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.gXX 00:12:46.431 14:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.gXX 00:12:46.688 14:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.rXY ]] 00:12:46.688 14:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rXY 00:12:46.688 14:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.688 14:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.688 14:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.688 14:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rXY 00:12:46.688 14:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rXY 00:12:47.251 14:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:47.251 14:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.UR1 00:12:47.251 14:52:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.251 14:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.251 14:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.251 14:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.UR1 00:12:47.251 14:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.UR1 00:12:47.508 14:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.eCO ]] 00:12:47.508 14:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eCO 00:12:47.508 14:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.508 14:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.508 14:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.508 14:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eCO 00:12:47.508 14:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eCO 00:12:47.765 14:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:47.765 14:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.uw5 00:12:47.765 14:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.765 14:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.765 14:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.765 14:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.uw5 00:12:47.765 14:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.uw5 00:12:48.332 14:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.UGi ]] 00:12:48.332 14:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UGi 00:12:48.332 14:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.332 14:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.332 14:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.332 14:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UGi 00:12:48.332 14:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UGi 00:12:48.593 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:48.593 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oOG 00:12:48.593 14:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.593 14:52:27 nvmf_tcp.nvmf_auth_target -- 
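[Annotation] Each generated file is then registered twice: rpc_cmd talks to the nvmf target (on its default socket, /var/tmp/spdk.sock per the waitforlisten call above), and the hostrpc wrapper repeats the same keyring_file_add_key call against the second SPDK application listening on /var/tmp/host.sock (pid 77893), so both sides of the later DH-HMAC-CHAP handshake can resolve key0..key3 and ckey0..ckey2 by name. Condensed from the calls above, the pair of registrations for one key index looks like this (the file names are the temporaries from this particular run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc keyring_file_add_key key1  /tmp/spdk.key-sha256.UR1                        # target side
    $rpc -s /var/tmp/host.sock keyring_file_add_key key1  /tmp/spdk.key-sha256.UR1  # host side
    $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eCO
    $rpc -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eCO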
common/autotest_common.sh@10 -- # set +x 00:12:48.593 14:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.593 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.oOG 00:12:48.593 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.oOG 00:12:48.852 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:12:48.852 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:48.852 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:48.852 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.852 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:48.852 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:49.416 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:12:49.416 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.416 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:49.416 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:49.416 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:49.416 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.416 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.416 14:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.416 14:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.416 14:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.416 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.416 14:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.674 00:12:49.674 14:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:49.674 14:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:49.674 14:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.931 14:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.931 14:52:28 nvmf_tcp.nvmf_auth_target -- 
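[Annotation] From here the script loops over every digest, DH group, and key index. One connect_authenticate iteration (sha256 / null / key0 in the lines above) reduces to three RPCs: the host is told which digests and DH groups it may negotiate, the target subsystem is told which key (and, when a ckey exists, which controller key) the host NQN must present, and the host then attaches a controller using the same key names; bdev_nvme_get_controllers is read afterwards to confirm nvme0 exists. A condensed sketch using the names from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c
    subnqn=nqn.2024-03.io.spdk:cnode0

    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"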
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.931 14:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.931 14:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.931 14:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.931 14:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:49.931 { 00:12:49.931 "auth": { 00:12:49.931 "dhgroup": "null", 00:12:49.931 "digest": "sha256", 00:12:49.931 "state": "completed" 00:12:49.931 }, 00:12:49.931 "cntlid": 1, 00:12:49.931 "listen_address": { 00:12:49.931 "adrfam": "IPv4", 00:12:49.931 "traddr": "10.0.0.2", 00:12:49.931 "trsvcid": "4420", 00:12:49.931 "trtype": "TCP" 00:12:49.931 }, 00:12:49.931 "peer_address": { 00:12:49.931 "adrfam": "IPv4", 00:12:49.931 "traddr": "10.0.0.1", 00:12:49.931 "trsvcid": "51184", 00:12:49.931 "trtype": "TCP" 00:12:49.931 }, 00:12:49.931 "qid": 0, 00:12:49.931 "state": "enabled", 00:12:49.931 "thread": "nvmf_tgt_poll_group_000" 00:12:49.931 } 00:12:49.931 ]' 00:12:49.931 14:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:49.931 14:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:49.931 14:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:49.931 14:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:49.931 14:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:50.189 14:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.189 14:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.189 14:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.446 14:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:12:55.706 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.706 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:12:55.706 14:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.706 14:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.706 14:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.706 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:55.706 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:55.706 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:55.963 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:12:55.963 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:55.963 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:55.963 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:55.963 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:55.963 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.963 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.963 14:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.963 14:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.963 14:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.963 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.963 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.220 00:12:56.220 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.221 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.221 14:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.478 14:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.478 14:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.478 14:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.478 14:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.736 14:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.736 14:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:56.736 { 00:12:56.736 "auth": { 00:12:56.736 "dhgroup": "null", 00:12:56.736 "digest": "sha256", 00:12:56.736 "state": "completed" 00:12:56.736 }, 00:12:56.736 "cntlid": 3, 00:12:56.736 "listen_address": { 00:12:56.736 "adrfam": "IPv4", 00:12:56.736 "traddr": "10.0.0.2", 00:12:56.736 "trsvcid": "4420", 00:12:56.736 "trtype": "TCP" 00:12:56.736 }, 00:12:56.736 "peer_address": { 00:12:56.736 "adrfam": "IPv4", 00:12:56.736 "traddr": "10.0.0.1", 00:12:56.736 "trsvcid": "59042", 00:12:56.736 "trtype": "TCP" 00:12:56.736 }, 00:12:56.736 "qid": 0, 00:12:56.736 "state": "enabled", 00:12:56.736 "thread": "nvmf_tgt_poll_group_000" 
00:12:56.736 } 00:12:56.736 ]' 00:12:56.736 14:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:56.736 14:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:56.736 14:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:56.736 14:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:56.736 14:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:56.736 14:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.736 14:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.736 14:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.301 14:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:12:57.867 14:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.867 14:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:12:57.867 14:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.867 14:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.125 14:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.125 14:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:58.125 14:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:58.125 14:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:58.383 14:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:12:58.383 14:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:58.383 14:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:58.383 14:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:58.383 14:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:58.383 14:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.383 14:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.383 14:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.383 14:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:12:58.383 14:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.383 14:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.383 14:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.641 00:12:58.899 14:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.899 14:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.899 14:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.157 14:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.157 14:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.157 14:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.157 14:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.157 14:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.157 14:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:59.157 { 00:12:59.157 "auth": { 00:12:59.157 "dhgroup": "null", 00:12:59.157 "digest": "sha256", 00:12:59.157 "state": "completed" 00:12:59.157 }, 00:12:59.157 "cntlid": 5, 00:12:59.157 "listen_address": { 00:12:59.157 "adrfam": "IPv4", 00:12:59.157 "traddr": "10.0.0.2", 00:12:59.157 "trsvcid": "4420", 00:12:59.157 "trtype": "TCP" 00:12:59.157 }, 00:12:59.157 "peer_address": { 00:12:59.157 "adrfam": "IPv4", 00:12:59.157 "traddr": "10.0.0.1", 00:12:59.157 "trsvcid": "59070", 00:12:59.157 "trtype": "TCP" 00:12:59.157 }, 00:12:59.157 "qid": 0, 00:12:59.157 "state": "enabled", 00:12:59.157 "thread": "nvmf_tgt_poll_group_000" 00:12:59.157 } 00:12:59.157 ]' 00:12:59.157 14:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:59.157 14:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:59.157 14:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:59.157 14:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:59.157 14:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:59.415 14:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.415 14:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.415 14:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.673 14:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid 
de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:13:00.606 14:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.606 14:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:00.606 14:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.606 14:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.606 14:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.606 14:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:00.606 14:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:00.606 14:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:00.864 14:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:13:00.864 14:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.865 14:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:00.865 14:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:00.865 14:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:00.865 14:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.865 14:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:13:00.865 14:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.865 14:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.865 14:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.865 14:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.865 14:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:01.431 00:13:01.431 14:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:01.431 14:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.431 14:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.689 14:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:13:01.689 14:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.689 14:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.689 14:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.689 14:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.689 14:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.689 { 00:13:01.689 "auth": { 00:13:01.689 "dhgroup": "null", 00:13:01.689 "digest": "sha256", 00:13:01.689 "state": "completed" 00:13:01.689 }, 00:13:01.689 "cntlid": 7, 00:13:01.689 "listen_address": { 00:13:01.689 "adrfam": "IPv4", 00:13:01.689 "traddr": "10.0.0.2", 00:13:01.689 "trsvcid": "4420", 00:13:01.689 "trtype": "TCP" 00:13:01.689 }, 00:13:01.689 "peer_address": { 00:13:01.689 "adrfam": "IPv4", 00:13:01.689 "traddr": "10.0.0.1", 00:13:01.689 "trsvcid": "59098", 00:13:01.689 "trtype": "TCP" 00:13:01.689 }, 00:13:01.689 "qid": 0, 00:13:01.689 "state": "enabled", 00:13:01.689 "thread": "nvmf_tgt_poll_group_000" 00:13:01.689 } 00:13:01.689 ]' 00:13:01.689 14:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.689 14:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:01.689 14:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.947 14:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:01.947 14:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.947 14:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.947 14:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.947 14:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.205 14:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:13:03.580 14:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.580 14:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:03.580 14:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.580 14:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.580 14:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.580 14:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:03.580 14:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:03.580 14:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:03.580 14:52:41 
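[Annotation] After the SPDK-to-SPDK path passes, each key is also exercised with the kernel initiator: nvme connect is handed the literal DHHC-1 secret strings (the formatted contents of the key files), the controller is disconnected again, and nvmf_subsystem_remove_host resets the subsystem before the next digest/dhgroup combination starts. A trimmed example of that host-side check, using the secrets from this run:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c
    subnqn=nqn.2024-03.io.spdk:cnode0

    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c \
        --dhchap-secret      'DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==:' \
        --dhchap-ctrl-secret 'DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0:'
    nvme disconnect -n "$subnqn"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"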
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:03.580 14:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:13:03.580 14:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:03.580 14:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:03.580 14:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:03.580 14:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:03.580 14:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.580 14:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.580 14:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.580 14:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.580 14:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.580 14:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.580 14:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.208 00:13:04.208 14:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:04.208 14:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.208 14:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:04.466 14:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.466 14:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.466 14:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.466 14:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.466 14:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.466 14:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:04.466 { 00:13:04.466 "auth": { 00:13:04.466 "dhgroup": "ffdhe2048", 00:13:04.466 "digest": "sha256", 00:13:04.466 "state": "completed" 00:13:04.466 }, 00:13:04.466 "cntlid": 9, 00:13:04.466 "listen_address": { 00:13:04.466 "adrfam": "IPv4", 00:13:04.466 "traddr": "10.0.0.2", 00:13:04.466 "trsvcid": "4420", 00:13:04.466 "trtype": "TCP" 00:13:04.466 }, 00:13:04.466 "peer_address": { 00:13:04.466 "adrfam": "IPv4", 00:13:04.466 "traddr": "10.0.0.1", 00:13:04.466 "trsvcid": "59116", 00:13:04.466 "trtype": "TCP" 00:13:04.466 }, 00:13:04.466 "qid": 0, 
00:13:04.466 "state": "enabled", 00:13:04.466 "thread": "nvmf_tgt_poll_group_000" 00:13:04.466 } 00:13:04.466 ]' 00:13:04.466 14:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:04.724 14:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:04.724 14:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:04.724 14:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:04.724 14:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:04.724 14:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.724 14:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.724 14:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.290 14:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:13:06.224 14:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.224 14:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:06.224 14:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.224 14:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.224 14:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.224 14:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:06.224 14:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:06.224 14:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:06.790 14:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:13:06.790 14:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:06.790 14:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:06.790 14:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:06.790 14:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:06.790 14:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.790 14:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.790 14:52:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.790 14:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.790 14:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.790 14:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.790 14:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.049 00:13:07.307 14:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:07.307 14:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:07.307 14:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.565 14:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.565 14:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.565 14:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.565 14:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.565 14:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.565 14:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:07.565 { 00:13:07.565 "auth": { 00:13:07.565 "dhgroup": "ffdhe2048", 00:13:07.565 "digest": "sha256", 00:13:07.565 "state": "completed" 00:13:07.565 }, 00:13:07.565 "cntlid": 11, 00:13:07.565 "listen_address": { 00:13:07.565 "adrfam": "IPv4", 00:13:07.565 "traddr": "10.0.0.2", 00:13:07.565 "trsvcid": "4420", 00:13:07.565 "trtype": "TCP" 00:13:07.565 }, 00:13:07.565 "peer_address": { 00:13:07.565 "adrfam": "IPv4", 00:13:07.565 "traddr": "10.0.0.1", 00:13:07.565 "trsvcid": "34754", 00:13:07.565 "trtype": "TCP" 00:13:07.565 }, 00:13:07.565 "qid": 0, 00:13:07.565 "state": "enabled", 00:13:07.565 "thread": "nvmf_tgt_poll_group_000" 00:13:07.565 } 00:13:07.565 ]' 00:13:07.565 14:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:07.565 14:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:07.565 14:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:07.565 14:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:07.565 14:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:07.824 14:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.824 14:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.824 14:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.082 14:52:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:13:09.015 14:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.015 14:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:09.015 14:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.015 14:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.015 14:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.015 14:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:09.015 14:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:09.015 14:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:09.274 14:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:13:09.274 14:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:09.274 14:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:09.274 14:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:09.274 14:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:09.274 14:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.274 14:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.274 14:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.274 14:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.274 14:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.274 14:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.274 14:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.922 00:13:09.922 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:09.922 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.922 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:10.180 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.180 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.180 14:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.180 14:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.180 14:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.180 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:10.180 { 00:13:10.180 "auth": { 00:13:10.180 "dhgroup": "ffdhe2048", 00:13:10.180 "digest": "sha256", 00:13:10.180 "state": "completed" 00:13:10.180 }, 00:13:10.180 "cntlid": 13, 00:13:10.180 "listen_address": { 00:13:10.180 "adrfam": "IPv4", 00:13:10.180 "traddr": "10.0.0.2", 00:13:10.180 "trsvcid": "4420", 00:13:10.180 "trtype": "TCP" 00:13:10.180 }, 00:13:10.180 "peer_address": { 00:13:10.180 "adrfam": "IPv4", 00:13:10.180 "traddr": "10.0.0.1", 00:13:10.180 "trsvcid": "34780", 00:13:10.180 "trtype": "TCP" 00:13:10.180 }, 00:13:10.180 "qid": 0, 00:13:10.180 "state": "enabled", 00:13:10.180 "thread": "nvmf_tgt_poll_group_000" 00:13:10.180 } 00:13:10.180 ]' 00:13:10.180 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:10.180 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:10.180 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:10.180 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:10.180 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:10.438 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.438 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.438 14:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.695 14:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:13:11.629 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.629 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:11.629 14:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.629 14:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.629 14:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.629 14:52:50 
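[Annotation] Every iteration ends by asking the target what was actually negotiated: nvmf_subsystem_get_qpairs returns the qpair list seen in the JSON blocks above, and three jq probes assert that .auth.digest, .auth.dhgroup, and .auth.state match the combination under test ("completed" meaning the DH-HMAC-CHAP exchange succeeded) before the controller is detached. Roughly, condensed from the repeated checks in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0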
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:11.629 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:11.629 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:12.196 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:13:12.196 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.196 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:12.196 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:12.196 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:12.196 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.196 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:13:12.196 14:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.196 14:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.196 14:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.196 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:12.196 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:12.454 00:13:12.454 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:12.454 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.454 14:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:12.712 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.712 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.712 14:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.712 14:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.712 14:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.712 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:12.712 { 00:13:12.712 "auth": { 00:13:12.712 "dhgroup": "ffdhe2048", 00:13:12.712 "digest": "sha256", 00:13:12.712 "state": "completed" 00:13:12.712 }, 00:13:12.712 "cntlid": 15, 00:13:12.712 "listen_address": { 00:13:12.712 "adrfam": "IPv4", 00:13:12.712 "traddr": "10.0.0.2", 00:13:12.712 "trsvcid": "4420", 00:13:12.712 "trtype": "TCP" 00:13:12.712 }, 00:13:12.712 
"peer_address": { 00:13:12.712 "adrfam": "IPv4", 00:13:12.712 "traddr": "10.0.0.1", 00:13:12.712 "trsvcid": "34800", 00:13:12.712 "trtype": "TCP" 00:13:12.712 }, 00:13:12.712 "qid": 0, 00:13:12.712 "state": "enabled", 00:13:12.712 "thread": "nvmf_tgt_poll_group_000" 00:13:12.712 } 00:13:12.712 ]' 00:13:12.712 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:12.712 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:12.712 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:12.712 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:12.712 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:12.970 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.970 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.970 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.228 14:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:13:14.162 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.162 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:14.162 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.162 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.162 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.162 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:14.162 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.162 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:14.162 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:14.419 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:13:14.419 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.419 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:14.419 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:14.419 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:14.419 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.419 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.419 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.419 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.419 14:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.419 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.419 14:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.676 00:13:14.676 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:14.677 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:14.677 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.242 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.242 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.242 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.242 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.242 14:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.242 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.242 { 00:13:15.242 "auth": { 00:13:15.242 "dhgroup": "ffdhe3072", 00:13:15.242 "digest": "sha256", 00:13:15.242 "state": "completed" 00:13:15.242 }, 00:13:15.242 "cntlid": 17, 00:13:15.242 "listen_address": { 00:13:15.242 "adrfam": "IPv4", 00:13:15.242 "traddr": "10.0.0.2", 00:13:15.242 "trsvcid": "4420", 00:13:15.242 "trtype": "TCP" 00:13:15.242 }, 00:13:15.242 "peer_address": { 00:13:15.242 "adrfam": "IPv4", 00:13:15.242 "traddr": "10.0.0.1", 00:13:15.242 "trsvcid": "34824", 00:13:15.242 "trtype": "TCP" 00:13:15.242 }, 00:13:15.242 "qid": 0, 00:13:15.242 "state": "enabled", 00:13:15.242 "thread": "nvmf_tgt_poll_group_000" 00:13:15.242 } 00:13:15.242 ]' 00:13:15.242 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.242 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:15.242 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.242 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:15.242 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.242 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.242 14:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.242 14:52:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.832 14:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:13:16.759 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.759 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:16.759 14:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.759 14:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.759 14:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.759 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.759 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:16.759 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:17.015 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:13:17.015 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:17.015 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:17.015 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:17.015 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:17.015 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.015 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.015 14:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.015 14:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.015 14:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.015 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.015 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.272 00:13:17.272 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.272 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.272 14:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:17.529 14:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.529 14:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.529 14:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.529 14:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.786 14:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.786 14:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.786 { 00:13:17.786 "auth": { 00:13:17.786 "dhgroup": "ffdhe3072", 00:13:17.786 "digest": "sha256", 00:13:17.786 "state": "completed" 00:13:17.786 }, 00:13:17.786 "cntlid": 19, 00:13:17.786 "listen_address": { 00:13:17.786 "adrfam": "IPv4", 00:13:17.786 "traddr": "10.0.0.2", 00:13:17.786 "trsvcid": "4420", 00:13:17.786 "trtype": "TCP" 00:13:17.786 }, 00:13:17.786 "peer_address": { 00:13:17.786 "adrfam": "IPv4", 00:13:17.786 "traddr": "10.0.0.1", 00:13:17.786 "trsvcid": "60624", 00:13:17.786 "trtype": "TCP" 00:13:17.786 }, 00:13:17.786 "qid": 0, 00:13:17.786 "state": "enabled", 00:13:17.786 "thread": "nvmf_tgt_poll_group_000" 00:13:17.786 } 00:13:17.786 ]' 00:13:17.786 14:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.786 14:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:17.786 14:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.786 14:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:17.786 14:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.786 14:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.786 14:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.786 14:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.356 14:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:13:18.920 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.920 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:18.920 14:52:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.920 14:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.920 14:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.920 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:18.920 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:18.920 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:19.484 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:13:19.484 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:19.484 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:19.484 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:19.484 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:19.484 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.484 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.484 14:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.484 14:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.484 14:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.484 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.484 14:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.740 00:13:19.740 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.740 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.741 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.998 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.998 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.998 14:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.998 14:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.265 14:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.265 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:20.265 { 00:13:20.265 "auth": { 
00:13:20.265 "dhgroup": "ffdhe3072", 00:13:20.265 "digest": "sha256", 00:13:20.265 "state": "completed" 00:13:20.265 }, 00:13:20.265 "cntlid": 21, 00:13:20.265 "listen_address": { 00:13:20.265 "adrfam": "IPv4", 00:13:20.265 "traddr": "10.0.0.2", 00:13:20.265 "trsvcid": "4420", 00:13:20.265 "trtype": "TCP" 00:13:20.265 }, 00:13:20.265 "peer_address": { 00:13:20.265 "adrfam": "IPv4", 00:13:20.265 "traddr": "10.0.0.1", 00:13:20.265 "trsvcid": "60646", 00:13:20.265 "trtype": "TCP" 00:13:20.265 }, 00:13:20.265 "qid": 0, 00:13:20.265 "state": "enabled", 00:13:20.265 "thread": "nvmf_tgt_poll_group_000" 00:13:20.265 } 00:13:20.265 ]' 00:13:20.265 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:20.265 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:20.265 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:20.265 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:20.265 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:20.265 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.265 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.265 14:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.524 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:13:21.458 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.458 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:21.458 14:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.458 14:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.458 14:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.458 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:21.458 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:21.458 14:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:21.716 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:13:21.716 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:21.716 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:21.716 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:13:21.716 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:21.716 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.716 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:13:21.716 14:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.716 14:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.974 14:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.974 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:21.974 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:22.233 00:13:22.233 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:22.233 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:22.233 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.491 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.491 14:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.491 14:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.491 14:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.491 14:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.491 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:22.491 { 00:13:22.491 "auth": { 00:13:22.491 "dhgroup": "ffdhe3072", 00:13:22.491 "digest": "sha256", 00:13:22.491 "state": "completed" 00:13:22.491 }, 00:13:22.491 "cntlid": 23, 00:13:22.491 "listen_address": { 00:13:22.491 "adrfam": "IPv4", 00:13:22.491 "traddr": "10.0.0.2", 00:13:22.491 "trsvcid": "4420", 00:13:22.491 "trtype": "TCP" 00:13:22.491 }, 00:13:22.491 "peer_address": { 00:13:22.491 "adrfam": "IPv4", 00:13:22.491 "traddr": "10.0.0.1", 00:13:22.491 "trsvcid": "60686", 00:13:22.491 "trtype": "TCP" 00:13:22.491 }, 00:13:22.491 "qid": 0, 00:13:22.491 "state": "enabled", 00:13:22.491 "thread": "nvmf_tgt_poll_group_000" 00:13:22.491 } 00:13:22.491 ]' 00:13:22.491 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:22.491 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:22.491 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:22.491 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:22.491 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:22.491 14:53:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.491 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.491 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.057 14:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:13:23.622 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.622 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:23.622 14:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.622 14:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.622 14:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.622 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:23.622 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.622 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:23.622 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:23.880 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:13:23.880 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.880 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:23.880 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:23.880 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:23.880 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.880 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.880 14:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.880 14:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.880 14:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.880 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.880 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.446 00:13:24.446 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.446 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.446 14:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.703 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.703 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.703 14:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.703 14:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.703 14:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.703 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.703 { 00:13:24.703 "auth": { 00:13:24.703 "dhgroup": "ffdhe4096", 00:13:24.703 "digest": "sha256", 00:13:24.703 "state": "completed" 00:13:24.703 }, 00:13:24.703 "cntlid": 25, 00:13:24.703 "listen_address": { 00:13:24.703 "adrfam": "IPv4", 00:13:24.703 "traddr": "10.0.0.2", 00:13:24.703 "trsvcid": "4420", 00:13:24.703 "trtype": "TCP" 00:13:24.703 }, 00:13:24.703 "peer_address": { 00:13:24.703 "adrfam": "IPv4", 00:13:24.703 "traddr": "10.0.0.1", 00:13:24.703 "trsvcid": "60714", 00:13:24.703 "trtype": "TCP" 00:13:24.703 }, 00:13:24.703 "qid": 0, 00:13:24.703 "state": "enabled", 00:13:24.703 "thread": "nvmf_tgt_poll_group_000" 00:13:24.703 } 00:13:24.703 ]' 00:13:24.703 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:24.703 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:24.703 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:24.960 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:24.960 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:24.960 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.960 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.960 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.218 14:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.151 
14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.151 14:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.718 00:13:26.718 14:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:26.718 14:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:26.718 14:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.283 14:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.283 14:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.283 14:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.283 14:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:13:27.283 14:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.283 14:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:27.283 { 00:13:27.283 "auth": { 00:13:27.283 "dhgroup": "ffdhe4096", 00:13:27.283 "digest": "sha256", 00:13:27.283 "state": "completed" 00:13:27.283 }, 00:13:27.283 "cntlid": 27, 00:13:27.283 "listen_address": { 00:13:27.283 "adrfam": "IPv4", 00:13:27.283 "traddr": "10.0.0.2", 00:13:27.283 "trsvcid": "4420", 00:13:27.283 "trtype": "TCP" 00:13:27.283 }, 00:13:27.283 "peer_address": { 00:13:27.283 "adrfam": "IPv4", 00:13:27.283 "traddr": "10.0.0.1", 00:13:27.283 "trsvcid": "45594", 00:13:27.283 "trtype": "TCP" 00:13:27.283 }, 00:13:27.283 "qid": 0, 00:13:27.283 "state": "enabled", 00:13:27.283 "thread": "nvmf_tgt_poll_group_000" 00:13:27.283 } 00:13:27.283 ]' 00:13:27.283 14:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:27.283 14:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:27.283 14:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:27.283 14:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:27.283 14:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:27.283 14:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.283 14:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.283 14:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.541 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:13:28.475 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.475 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:28.475 14:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.475 14:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.475 14:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.475 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:28.475 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:28.475 14:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:28.733 14:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:13:28.733 14:53:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:28.733 14:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:28.733 14:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:28.733 14:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:28.733 14:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.733 14:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.733 14:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.733 14:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.733 14:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.733 14:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.733 14:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.992 00:13:28.992 14:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:28.992 14:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:28.992 14:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.561 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.561 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.561 14:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.561 14:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.561 14:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.561 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:29.561 { 00:13:29.561 "auth": { 00:13:29.561 "dhgroup": "ffdhe4096", 00:13:29.561 "digest": "sha256", 00:13:29.561 "state": "completed" 00:13:29.561 }, 00:13:29.561 "cntlid": 29, 00:13:29.561 "listen_address": { 00:13:29.561 "adrfam": "IPv4", 00:13:29.561 "traddr": "10.0.0.2", 00:13:29.561 "trsvcid": "4420", 00:13:29.561 "trtype": "TCP" 00:13:29.561 }, 00:13:29.561 "peer_address": { 00:13:29.561 "adrfam": "IPv4", 00:13:29.561 "traddr": "10.0.0.1", 00:13:29.561 "trsvcid": "45608", 00:13:29.561 "trtype": "TCP" 00:13:29.561 }, 00:13:29.561 "qid": 0, 00:13:29.561 "state": "enabled", 00:13:29.561 "thread": "nvmf_tgt_poll_group_000" 00:13:29.561 } 00:13:29.561 ]' 00:13:29.561 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:29.561 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:29.561 14:53:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:29.561 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:29.561 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:29.819 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.819 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.819 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.077 14:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:13:31.009 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.009 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:31.009 14:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.009 14:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.009 14:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.009 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:31.009 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:31.009 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:31.268 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:13:31.268 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:31.268 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:31.268 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:31.268 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:31.268 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.268 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:13:31.268 14:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.268 14:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.268 14:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.268 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:31.268 14:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:31.834 00:13:31.834 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:31.834 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:31.834 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.399 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.399 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.399 14:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.399 14:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.399 14:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.399 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:32.399 { 00:13:32.399 "auth": { 00:13:32.399 "dhgroup": "ffdhe4096", 00:13:32.399 "digest": "sha256", 00:13:32.399 "state": "completed" 00:13:32.399 }, 00:13:32.399 "cntlid": 31, 00:13:32.399 "listen_address": { 00:13:32.399 "adrfam": "IPv4", 00:13:32.399 "traddr": "10.0.0.2", 00:13:32.399 "trsvcid": "4420", 00:13:32.399 "trtype": "TCP" 00:13:32.399 }, 00:13:32.399 "peer_address": { 00:13:32.399 "adrfam": "IPv4", 00:13:32.399 "traddr": "10.0.0.1", 00:13:32.400 "trsvcid": "45634", 00:13:32.400 "trtype": "TCP" 00:13:32.400 }, 00:13:32.400 "qid": 0, 00:13:32.400 "state": "enabled", 00:13:32.400 "thread": "nvmf_tgt_poll_group_000" 00:13:32.400 } 00:13:32.400 ]' 00:13:32.400 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:32.400 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:32.400 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:32.400 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:32.400 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:32.400 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.400 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.400 14:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.965 14:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:13:33.898 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.898 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.898 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:33.898 14:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.898 14:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.898 14:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.898 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:33.898 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:33.898 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:33.898 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:34.156 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:13:34.156 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:34.156 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:34.156 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:34.156 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:34.156 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.156 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.156 14:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.156 14:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.156 14:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.156 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.156 14:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.720 00:13:34.720 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.720 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.720 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.978 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.978 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
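The jq filters that follow each nvmf_subsystem_get_qpairs call only inspect the auth block of the first qpair. Pulled out of the test loop, and assuming the same filters used in the trace, the check amounts to the few lines below; the expected values match the sha256/ffdhe6144 iteration in progress at this point in the log.

# Stand-alone version of the per-qpair auth check (filters copied from the trace).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

An auth state of "completed" is what the test treats as a successful DH-HMAC-CHAP exchange on that qpair.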
00:13:34.978 14:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.978 14:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.978 14:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.978 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.978 { 00:13:34.978 "auth": { 00:13:34.978 "dhgroup": "ffdhe6144", 00:13:34.978 "digest": "sha256", 00:13:34.978 "state": "completed" 00:13:34.978 }, 00:13:34.978 "cntlid": 33, 00:13:34.978 "listen_address": { 00:13:34.978 "adrfam": "IPv4", 00:13:34.978 "traddr": "10.0.0.2", 00:13:34.978 "trsvcid": "4420", 00:13:34.978 "trtype": "TCP" 00:13:34.978 }, 00:13:34.978 "peer_address": { 00:13:34.978 "adrfam": "IPv4", 00:13:34.978 "traddr": "10.0.0.1", 00:13:34.978 "trsvcid": "45656", 00:13:34.978 "trtype": "TCP" 00:13:34.978 }, 00:13:34.978 "qid": 0, 00:13:34.978 "state": "enabled", 00:13:34.978 "thread": "nvmf_tgt_poll_group_000" 00:13:34.978 } 00:13:34.978 ]' 00:13:34.978 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.978 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:34.978 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.978 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:34.978 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.978 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.978 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.978 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.237 14:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:13:36.169 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.169 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:36.169 14:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.169 14:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.169 14:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.169 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:36.169 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:36.169 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:36.427 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:13:36.427 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:36.427 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:36.427 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:36.427 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:36.427 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.427 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.427 14:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.427 14:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.427 14:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.427 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.427 14:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.018 00:13:37.018 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:37.018 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.018 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:37.292 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.292 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.292 14:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.292 14:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.292 14:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.292 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:37.292 { 00:13:37.292 "auth": { 00:13:37.292 "dhgroup": "ffdhe6144", 00:13:37.292 "digest": "sha256", 00:13:37.292 "state": "completed" 00:13:37.292 }, 00:13:37.292 "cntlid": 35, 00:13:37.292 "listen_address": { 00:13:37.292 "adrfam": "IPv4", 00:13:37.292 "traddr": "10.0.0.2", 00:13:37.292 "trsvcid": "4420", 00:13:37.292 "trtype": "TCP" 00:13:37.292 }, 00:13:37.292 "peer_address": { 00:13:37.292 "adrfam": "IPv4", 00:13:37.292 "traddr": "10.0.0.1", 00:13:37.292 "trsvcid": "39124", 00:13:37.292 "trtype": "TCP" 00:13:37.292 }, 00:13:37.292 "qid": 0, 00:13:37.292 "state": "enabled", 00:13:37.292 "thread": "nvmf_tgt_poll_group_000" 00:13:37.292 } 00:13:37.292 ]' 00:13:37.292 14:53:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:37.292 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:37.292 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:37.292 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:37.292 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:37.292 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.292 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.292 14:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.551 14:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:13:38.486 14:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.486 14:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:38.486 14:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.486 14:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.486 14:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.486 14:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:38.486 14:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:38.486 14:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:38.745 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:13:38.745 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:38.745 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:38.745 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:38.745 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:38.745 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.745 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:38.745 14:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.745 14:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.745 
14:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.745 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:38.745 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.311 00:13:39.311 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:39.311 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:39.311 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.569 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.569 14:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.569 14:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.569 14:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.569 14:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.569 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:39.569 { 00:13:39.569 "auth": { 00:13:39.569 "dhgroup": "ffdhe6144", 00:13:39.569 "digest": "sha256", 00:13:39.569 "state": "completed" 00:13:39.569 }, 00:13:39.569 "cntlid": 37, 00:13:39.569 "listen_address": { 00:13:39.569 "adrfam": "IPv4", 00:13:39.569 "traddr": "10.0.0.2", 00:13:39.569 "trsvcid": "4420", 00:13:39.569 "trtype": "TCP" 00:13:39.569 }, 00:13:39.569 "peer_address": { 00:13:39.569 "adrfam": "IPv4", 00:13:39.569 "traddr": "10.0.0.1", 00:13:39.569 "trsvcid": "39160", 00:13:39.569 "trtype": "TCP" 00:13:39.569 }, 00:13:39.569 "qid": 0, 00:13:39.569 "state": "enabled", 00:13:39.569 "thread": "nvmf_tgt_poll_group_000" 00:13:39.569 } 00:13:39.569 ]' 00:13:39.569 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:39.569 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:39.569 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:39.569 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:39.569 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:39.569 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.569 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.569 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.135 14:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid 
de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:13:40.700 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.700 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:40.700 14:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.700 14:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.700 14:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.700 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.700 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:40.700 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:40.958 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:13:40.958 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:40.958 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:40.958 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:40.958 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:40.958 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.958 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:13:40.958 14:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.958 14:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.958 14:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.958 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:40.958 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:41.525 00:13:41.525 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:41.525 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:41.525 14:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.784 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.784 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.784 14:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.784 14:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.784 14:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.784 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.784 { 00:13:41.784 "auth": { 00:13:41.784 "dhgroup": "ffdhe6144", 00:13:41.784 "digest": "sha256", 00:13:41.784 "state": "completed" 00:13:41.784 }, 00:13:41.784 "cntlid": 39, 00:13:41.784 "listen_address": { 00:13:41.784 "adrfam": "IPv4", 00:13:41.784 "traddr": "10.0.0.2", 00:13:41.784 "trsvcid": "4420", 00:13:41.784 "trtype": "TCP" 00:13:41.784 }, 00:13:41.784 "peer_address": { 00:13:41.784 "adrfam": "IPv4", 00:13:41.784 "traddr": "10.0.0.1", 00:13:41.784 "trsvcid": "39186", 00:13:41.784 "trtype": "TCP" 00:13:41.784 }, 00:13:41.784 "qid": 0, 00:13:41.784 "state": "enabled", 00:13:41.784 "thread": "nvmf_tgt_poll_group_000" 00:13:41.784 } 00:13:41.784 ]' 00:13:41.784 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.784 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:41.784 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.784 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:41.784 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:42.042 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.042 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.043 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.333 14:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:13:42.898 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.898 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:42.899 14:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.899 14:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.899 14:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.899 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:42.899 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.899 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:13:42.899 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:43.155 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:13:43.155 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:43.155 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:43.155 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:43.155 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:43.155 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.155 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:43.155 14:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.155 14:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.155 14:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.155 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:43.155 14:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.086 00:13:44.086 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:44.086 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:44.086 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.086 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.086 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.086 14:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.086 14:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.086 14:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.086 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:44.086 { 00:13:44.086 "auth": { 00:13:44.086 "dhgroup": "ffdhe8192", 00:13:44.086 "digest": "sha256", 00:13:44.086 "state": "completed" 00:13:44.086 }, 00:13:44.086 "cntlid": 41, 00:13:44.086 "listen_address": { 00:13:44.086 "adrfam": "IPv4", 00:13:44.086 "traddr": "10.0.0.2", 00:13:44.086 "trsvcid": "4420", 00:13:44.086 "trtype": "TCP" 00:13:44.086 }, 00:13:44.086 "peer_address": { 00:13:44.086 "adrfam": "IPv4", 00:13:44.086 "traddr": "10.0.0.1", 00:13:44.086 "trsvcid": "39208", 00:13:44.086 "trtype": "TCP" 00:13:44.086 }, 
00:13:44.086 "qid": 0, 00:13:44.086 "state": "enabled", 00:13:44.086 "thread": "nvmf_tgt_poll_group_000" 00:13:44.086 } 00:13:44.086 ]' 00:13:44.086 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:44.342 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:44.342 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:44.342 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:44.342 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:44.342 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.342 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.342 14:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.599 14:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:13:45.165 14:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.165 14:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:45.165 14:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.165 14:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.165 14:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.165 14:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:45.165 14:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:45.165 14:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:45.730 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:13:45.730 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:45.730 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:45.730 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:45.730 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:45.730 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.730 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:13:45.730 14:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.730 14:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.730 14:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.730 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.730 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.295 00:13:46.295 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:46.295 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.295 14:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:46.553 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.553 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.553 14:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.553 14:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.553 14:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.553 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:46.553 { 00:13:46.553 "auth": { 00:13:46.553 "dhgroup": "ffdhe8192", 00:13:46.553 "digest": "sha256", 00:13:46.553 "state": "completed" 00:13:46.553 }, 00:13:46.553 "cntlid": 43, 00:13:46.553 "listen_address": { 00:13:46.553 "adrfam": "IPv4", 00:13:46.553 "traddr": "10.0.0.2", 00:13:46.553 "trsvcid": "4420", 00:13:46.553 "trtype": "TCP" 00:13:46.553 }, 00:13:46.553 "peer_address": { 00:13:46.553 "adrfam": "IPv4", 00:13:46.553 "traddr": "10.0.0.1", 00:13:46.553 "trsvcid": "39306", 00:13:46.553 "trtype": "TCP" 00:13:46.553 }, 00:13:46.553 "qid": 0, 00:13:46.553 "state": "enabled", 00:13:46.553 "thread": "nvmf_tgt_poll_group_000" 00:13:46.553 } 00:13:46.553 ]' 00:13:46.553 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:46.553 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:46.553 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:46.553 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:46.553 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:46.811 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.811 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.811 14:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.069 14:53:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:13:47.635 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.635 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:47.635 14:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.635 14:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.635 14:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.635 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:47.635 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:47.635 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:47.894 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:13:47.894 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:47.894 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:47.894 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:47.894 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:47.894 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.894 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:47.894 14:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.894 14:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.894 14:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.894 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:47.894 14:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.829 00:13:48.829 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:48.829 14:53:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:48.829 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.829 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.829 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.829 14:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.829 14:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.098 14:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.098 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:49.098 { 00:13:49.098 "auth": { 00:13:49.098 "dhgroup": "ffdhe8192", 00:13:49.098 "digest": "sha256", 00:13:49.098 "state": "completed" 00:13:49.098 }, 00:13:49.098 "cntlid": 45, 00:13:49.098 "listen_address": { 00:13:49.098 "adrfam": "IPv4", 00:13:49.098 "traddr": "10.0.0.2", 00:13:49.098 "trsvcid": "4420", 00:13:49.098 "trtype": "TCP" 00:13:49.098 }, 00:13:49.098 "peer_address": { 00:13:49.098 "adrfam": "IPv4", 00:13:49.098 "traddr": "10.0.0.1", 00:13:49.098 "trsvcid": "39332", 00:13:49.098 "trtype": "TCP" 00:13:49.098 }, 00:13:49.098 "qid": 0, 00:13:49.098 "state": "enabled", 00:13:49.098 "thread": "nvmf_tgt_poll_group_000" 00:13:49.098 } 00:13:49.098 ]' 00:13:49.098 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:49.098 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:49.098 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:49.098 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:49.098 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:49.098 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.098 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.098 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.412 14:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:13:50.346 14:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.346 14:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:50.346 14:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.346 14:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.346 14:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
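Each iteration then checks that authentication actually completed and repeats the handshake with the kernel initiator. A condensed sketch of that verification half, again reconstructed from the trace (it reuses SUBNQN/HOSTNQN/hostrpc from the sketch above; $key_secret and $ckey_secret stand for the plaintext DHHC-1 secrets that the log passes to nvme connect):

  # confirm the host-side controller exists
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'      # expect: nvme0

  # inspect the target-side qpair and check the negotiated auth parameters
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
  echo "$qpairs" | jq -r '.[0].auth.digest'                 # expect the digest under test (e.g. sha256)
  echo "$qpairs" | jq -r '.[0].auth.dhgroup'                # expect the dhgroup under test (e.g. ffdhe8192)
  echo "$qpairs" | jq -r '.[0].auth.state'                  # expect: completed

  hostrpc bdev_nvme_detach_controller nvme0

  # repeat the handshake from the kernel initiator, passing the secrets directly
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
       --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c \
       --dhchap-secret "$key_secret" --dhchap-ctrl-secret "$ckey_secret"
  nvme disconnect -n "$SUBNQN"
  rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"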
00:13:50.346 14:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.346 14:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:50.346 14:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:50.604 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:13:50.604 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:50.604 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:50.604 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:50.604 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:50.604 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.604 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:13:50.604 14:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.604 14:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.604 14:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.604 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:50.604 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:51.170 00:13:51.170 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:51.170 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.170 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:51.427 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.427 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.427 14:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.427 14:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.427 14:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.427 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.427 { 00:13:51.427 "auth": { 00:13:51.427 "dhgroup": "ffdhe8192", 00:13:51.427 "digest": "sha256", 00:13:51.427 "state": "completed" 00:13:51.427 }, 00:13:51.427 "cntlid": 47, 00:13:51.427 "listen_address": { 00:13:51.427 "adrfam": "IPv4", 00:13:51.427 "traddr": "10.0.0.2", 00:13:51.427 "trsvcid": "4420", 00:13:51.427 "trtype": "TCP" 00:13:51.427 }, 
00:13:51.427 "peer_address": { 00:13:51.427 "adrfam": "IPv4", 00:13:51.427 "traddr": "10.0.0.1", 00:13:51.427 "trsvcid": "39348", 00:13:51.427 "trtype": "TCP" 00:13:51.427 }, 00:13:51.427 "qid": 0, 00:13:51.427 "state": "enabled", 00:13:51.427 "thread": "nvmf_tgt_poll_group_000" 00:13:51.427 } 00:13:51.427 ]' 00:13:51.427 14:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:51.427 14:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:51.427 14:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:51.684 14:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:51.685 14:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:51.685 14:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.685 14:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.685 14:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.943 14:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:13:52.508 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.508 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:52.508 14:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.508 14:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.508 14:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.766 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:52.766 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:52.766 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:52.766 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:52.766 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:53.024 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:13:53.024 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:53.024 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:53.024 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:53.024 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:53.024 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:13:53.024 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.024 14:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.024 14:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.024 14:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.024 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.024 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.281 00:13:53.281 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:53.281 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:53.281 14:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.539 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.539 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.539 14:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.539 14:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.539 14:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.539 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:53.539 { 00:13:53.539 "auth": { 00:13:53.539 "dhgroup": "null", 00:13:53.539 "digest": "sha384", 00:13:53.539 "state": "completed" 00:13:53.539 }, 00:13:53.539 "cntlid": 49, 00:13:53.539 "listen_address": { 00:13:53.539 "adrfam": "IPv4", 00:13:53.539 "traddr": "10.0.0.2", 00:13:53.539 "trsvcid": "4420", 00:13:53.539 "trtype": "TCP" 00:13:53.539 }, 00:13:53.539 "peer_address": { 00:13:53.539 "adrfam": "IPv4", 00:13:53.539 "traddr": "10.0.0.1", 00:13:53.539 "trsvcid": "39380", 00:13:53.539 "trtype": "TCP" 00:13:53.539 }, 00:13:53.539 "qid": 0, 00:13:53.539 "state": "enabled", 00:13:53.539 "thread": "nvmf_tgt_poll_group_000" 00:13:53.539 } 00:13:53.539 ]' 00:13:53.539 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:53.539 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:53.539 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:53.797 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:53.797 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:53.797 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.797 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:53.797 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.055 14:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:13:54.987 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.987 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:54.987 14:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.987 14:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.987 14:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.987 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.987 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:54.987 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:55.245 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:13:55.245 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:55.245 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:55.245 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:55.245 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:55.245 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.245 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.245 14:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.245 14:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.245 14:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.245 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.245 14:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.503 00:13:55.503 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:55.503 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:55.503 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.760 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.760 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.760 14:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.760 14:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.760 14:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.760 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.760 { 00:13:55.760 "auth": { 00:13:55.760 "dhgroup": "null", 00:13:55.760 "digest": "sha384", 00:13:55.760 "state": "completed" 00:13:55.760 }, 00:13:55.760 "cntlid": 51, 00:13:55.760 "listen_address": { 00:13:55.760 "adrfam": "IPv4", 00:13:55.760 "traddr": "10.0.0.2", 00:13:55.760 "trsvcid": "4420", 00:13:55.760 "trtype": "TCP" 00:13:55.760 }, 00:13:55.760 "peer_address": { 00:13:55.760 "adrfam": "IPv4", 00:13:55.760 "traddr": "10.0.0.1", 00:13:55.760 "trsvcid": "54242", 00:13:55.760 "trtype": "TCP" 00:13:55.760 }, 00:13:55.760 "qid": 0, 00:13:55.760 "state": "enabled", 00:13:55.760 "thread": "nvmf_tgt_poll_group_000" 00:13:55.760 } 00:13:55.760 ]' 00:13:55.760 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.760 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:55.760 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:56.017 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:56.017 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:56.018 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.018 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.018 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.275 14:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:13:56.841 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.841 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:56.841 
14:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.841 14:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.113 14:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.685 00:13:57.685 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:57.685 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:57.685 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.944 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.944 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.944 14:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.944 14:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.944 14:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.944 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.944 { 00:13:57.944 
"auth": { 00:13:57.944 "dhgroup": "null", 00:13:57.944 "digest": "sha384", 00:13:57.944 "state": "completed" 00:13:57.944 }, 00:13:57.944 "cntlid": 53, 00:13:57.944 "listen_address": { 00:13:57.944 "adrfam": "IPv4", 00:13:57.944 "traddr": "10.0.0.2", 00:13:57.944 "trsvcid": "4420", 00:13:57.944 "trtype": "TCP" 00:13:57.944 }, 00:13:57.944 "peer_address": { 00:13:57.944 "adrfam": "IPv4", 00:13:57.944 "traddr": "10.0.0.1", 00:13:57.944 "trsvcid": "54258", 00:13:57.944 "trtype": "TCP" 00:13:57.944 }, 00:13:57.944 "qid": 0, 00:13:57.944 "state": "enabled", 00:13:57.944 "thread": "nvmf_tgt_poll_group_000" 00:13:57.944 } 00:13:57.944 ]' 00:13:57.944 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.944 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:57.944 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.944 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:57.944 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.944 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.944 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.944 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.202 14:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:13:59.138 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.138 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:13:59.138 14:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.138 14:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.138 14:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.138 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:59.138 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:59.138 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:59.398 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:13:59.398 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:59.398 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:59.398 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:59.398 14:53:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:59.398 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.398 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:13:59.398 14:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.398 14:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.398 14:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.398 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:59.398 14:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:59.656 00:13:59.656 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.656 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.656 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.915 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.915 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.915 14:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.915 14:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.915 14:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.915 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:59.915 { 00:13:59.915 "auth": { 00:13:59.915 "dhgroup": "null", 00:13:59.915 "digest": "sha384", 00:13:59.915 "state": "completed" 00:13:59.915 }, 00:13:59.915 "cntlid": 55, 00:13:59.915 "listen_address": { 00:13:59.915 "adrfam": "IPv4", 00:13:59.915 "traddr": "10.0.0.2", 00:13:59.915 "trsvcid": "4420", 00:13:59.915 "trtype": "TCP" 00:13:59.915 }, 00:13:59.915 "peer_address": { 00:13:59.915 "adrfam": "IPv4", 00:13:59.915 "traddr": "10.0.0.1", 00:13:59.915 "trsvcid": "54278", 00:13:59.915 "trtype": "TCP" 00:13:59.915 }, 00:13:59.915 "qid": 0, 00:13:59.915 "state": "enabled", 00:13:59.915 "thread": "nvmf_tgt_poll_group_000" 00:13:59.915 } 00:13:59.915 ]' 00:13:59.915 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:00.173 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:00.173 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.173 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:00.173 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.173 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:14:00.173 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.174 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.432 14:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:14:01.366 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.367 14:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.933 00:14:01.933 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.933 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.933 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:01.933 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.933 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.933 14:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.933 14:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.933 14:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.933 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:01.933 { 00:14:01.933 "auth": { 00:14:01.933 "dhgroup": "ffdhe2048", 00:14:01.933 "digest": "sha384", 00:14:01.933 "state": "completed" 00:14:01.933 }, 00:14:01.933 "cntlid": 57, 00:14:01.933 "listen_address": { 00:14:01.933 "adrfam": "IPv4", 00:14:01.933 "traddr": "10.0.0.2", 00:14:01.933 "trsvcid": "4420", 00:14:01.933 "trtype": "TCP" 00:14:01.933 }, 00:14:01.933 "peer_address": { 00:14:01.933 "adrfam": "IPv4", 00:14:01.933 "traddr": "10.0.0.1", 00:14:01.933 "trsvcid": "54298", 00:14:01.933 "trtype": "TCP" 00:14:01.933 }, 00:14:01.933 "qid": 0, 00:14:01.933 "state": "enabled", 00:14:01.933 "thread": "nvmf_tgt_poll_group_000" 00:14:01.933 } 00:14:01.933 ]' 00:14:01.933 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.191 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:02.191 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.191 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:02.191 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.191 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.191 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.191 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.450 14:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:14:03.384 14:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.384 14:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:03.385 14:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.385 14:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.385 14:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.385 14:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.385 14:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:03.385 14:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:03.385 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:14:03.385 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.385 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:03.385 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:03.385 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:03.385 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.385 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.385 14:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.385 14:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.385 14:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.385 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.385 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.952 00:14:03.952 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:03.952 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.952 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:04.211 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.211 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.211 14:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.211 14:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.211 14:53:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.211 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.211 { 00:14:04.211 "auth": { 00:14:04.211 "dhgroup": "ffdhe2048", 00:14:04.211 "digest": "sha384", 00:14:04.211 "state": "completed" 00:14:04.211 }, 00:14:04.211 "cntlid": 59, 00:14:04.211 "listen_address": { 00:14:04.211 "adrfam": "IPv4", 00:14:04.211 "traddr": "10.0.0.2", 00:14:04.211 "trsvcid": "4420", 00:14:04.211 "trtype": "TCP" 00:14:04.211 }, 00:14:04.211 "peer_address": { 00:14:04.211 "adrfam": "IPv4", 00:14:04.211 "traddr": "10.0.0.1", 00:14:04.211 "trsvcid": "54326", 00:14:04.211 "trtype": "TCP" 00:14:04.211 }, 00:14:04.211 "qid": 0, 00:14:04.211 "state": "enabled", 00:14:04.211 "thread": "nvmf_tgt_poll_group_000" 00:14:04.211 } 00:14:04.211 ]' 00:14:04.211 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.211 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:04.211 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.211 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:04.211 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:04.211 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.211 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.211 14:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.470 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:14:05.405 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.405 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:05.405 14:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.405 14:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.405 14:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.405 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:05.405 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:05.405 14:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:05.405 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:14:05.405 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
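(Editor's note: the xtrace output above keeps repeating one cycle per dhgroup/key combination — set the host's allowed digest/dhgroup, add the host to the subsystem with the key under test, attach and detach an SPDK host controller, redo the handshake with the kernel initiator, then remove the host. The following is only a condensed, hypothetical bash sketch of that loop, built from the RPCs and flags visible in the trace; variable names are illustrative and the DHHC-1 secrets are elided, not reproduced.)

```bash
#!/usr/bin/env bash
# Hedged sketch of the cycle repeated in the trace; not the literal target/auth.sh code.
set -euo pipefail

TGT_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"      # target RPC (default socket, assumed)
HOST_RPC="$TGT_RPC -s /var/tmp/host.sock"                   # SPDK host process, as in the trace
SUBNQN="nqn.2024-03.io.spdk:cnode0"
HOST_NQN="nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c"
TRADDR=10.0.0.2

for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do
  for keyid in 0 1 2 3; do
    # Restrict the SPDK host to one digest/dhgroup combination for this pass.
    $HOST_RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"

    # Allow the host on the subsystem with the key under test (the trace also
    # passes --dhchap-ctrlr-key "ckey$keyid" when a controller key is defined).
    $TGT_RPC nvmf_subsystem_add_host "$SUBNQN" "$HOST_NQN" --dhchap-key "key$keyid"

    # Authenticate from the SPDK host; the qpair check shown later in the trace
    # happens here, then the controller is torn down again.
    $HOST_RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$TRADDR" -s 4420 \
        -q "$HOST_NQN" -n "$SUBNQN" --dhchap-key "key$keyid"
    $HOST_RPC bdev_nvme_detach_controller nvme0

    # Repeat the handshake with the kernel initiator (secrets elided here),
    # then remove the host entry before the next iteration.
    nvme connect -t tcp -a "$TRADDR" -n "$SUBNQN" -i 1 -q "$HOST_NQN" \
        --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret "DHHC-1:..."
    nvme disconnect -n "$SUBNQN"
    $TGT_RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOST_NQN"
  done
done
```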
00:14:05.405 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:05.405 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:05.405 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:05.405 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.405 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.405 14:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.405 14:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.663 14:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.663 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.663 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.921 00:14:05.921 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.921 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.921 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:06.179 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.179 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.179 14:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.179 14:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.180 14:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.180 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:06.180 { 00:14:06.180 "auth": { 00:14:06.180 "dhgroup": "ffdhe2048", 00:14:06.180 "digest": "sha384", 00:14:06.180 "state": "completed" 00:14:06.180 }, 00:14:06.180 "cntlid": 61, 00:14:06.180 "listen_address": { 00:14:06.180 "adrfam": "IPv4", 00:14:06.180 "traddr": "10.0.0.2", 00:14:06.180 "trsvcid": "4420", 00:14:06.180 "trtype": "TCP" 00:14:06.180 }, 00:14:06.180 "peer_address": { 00:14:06.180 "adrfam": "IPv4", 00:14:06.180 "traddr": "10.0.0.1", 00:14:06.180 "trsvcid": "34220", 00:14:06.180 "trtype": "TCP" 00:14:06.180 }, 00:14:06.180 "qid": 0, 00:14:06.180 "state": "enabled", 00:14:06.180 "thread": "nvmf_tgt_poll_group_000" 00:14:06.180 } 00:14:06.180 ]' 00:14:06.180 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:06.180 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:06.180 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:06.437 
14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:06.437 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:06.437 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.437 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.437 14:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.696 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:14:07.630 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.630 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:07.630 14:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.630 14:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.630 14:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.630 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:07.630 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:07.630 14:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:07.630 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:14:07.630 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:07.630 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:07.630 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:07.630 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:07.630 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.630 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:14:07.630 14:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.630 14:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.630 14:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.630 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:07.630 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:08.196 00:14:08.196 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.196 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.196 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.454 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.454 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.454 14:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.454 14:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.454 14:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.454 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:08.454 { 00:14:08.454 "auth": { 00:14:08.454 "dhgroup": "ffdhe2048", 00:14:08.454 "digest": "sha384", 00:14:08.455 "state": "completed" 00:14:08.455 }, 00:14:08.455 "cntlid": 63, 00:14:08.455 "listen_address": { 00:14:08.455 "adrfam": "IPv4", 00:14:08.455 "traddr": "10.0.0.2", 00:14:08.455 "trsvcid": "4420", 00:14:08.455 "trtype": "TCP" 00:14:08.455 }, 00:14:08.455 "peer_address": { 00:14:08.455 "adrfam": "IPv4", 00:14:08.455 "traddr": "10.0.0.1", 00:14:08.455 "trsvcid": "34248", 00:14:08.455 "trtype": "TCP" 00:14:08.455 }, 00:14:08.455 "qid": 0, 00:14:08.455 "state": "enabled", 00:14:08.455 "thread": "nvmf_tgt_poll_group_000" 00:14:08.455 } 00:14:08.455 ]' 00:14:08.455 14:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:08.455 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:08.455 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:08.455 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:08.455 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:08.713 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.713 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.713 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.971 14:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:14:09.538 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.538 14:53:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:09.538 14:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.538 14:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.538 14:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.538 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:09.538 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:09.538 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:09.538 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:09.796 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:14:09.796 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:09.796 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:09.796 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:09.796 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:09.796 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.796 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.796 14:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.796 14:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.796 14:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.796 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.796 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.364 00:14:10.364 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:10.364 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:10.364 14:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.627 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.627 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.627 14:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:10.627 14:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.627 14:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.627 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:10.627 { 00:14:10.627 "auth": { 00:14:10.627 "dhgroup": "ffdhe3072", 00:14:10.627 "digest": "sha384", 00:14:10.627 "state": "completed" 00:14:10.627 }, 00:14:10.627 "cntlid": 65, 00:14:10.627 "listen_address": { 00:14:10.627 "adrfam": "IPv4", 00:14:10.627 "traddr": "10.0.0.2", 00:14:10.627 "trsvcid": "4420", 00:14:10.627 "trtype": "TCP" 00:14:10.627 }, 00:14:10.627 "peer_address": { 00:14:10.627 "adrfam": "IPv4", 00:14:10.627 "traddr": "10.0.0.1", 00:14:10.627 "trsvcid": "34280", 00:14:10.627 "trtype": "TCP" 00:14:10.627 }, 00:14:10.627 "qid": 0, 00:14:10.627 "state": "enabled", 00:14:10.627 "thread": "nvmf_tgt_poll_group_000" 00:14:10.627 } 00:14:10.627 ]' 00:14:10.627 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:10.627 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:10.627 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:10.627 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:10.627 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:10.627 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.627 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.627 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.912 14:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:14:11.845 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.845 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:11.845 14:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.845 14:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.845 14:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.845 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:11.845 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:11.845 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:12.104 14:53:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:14:12.104 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.104 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:12.104 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:12.104 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:12.104 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.104 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.104 14:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.104 14:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.104 14:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.104 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.104 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.362 00:14:12.362 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:12.362 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:12.362 14:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.620 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.620 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.620 14:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.620 14:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.620 14:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.620 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:12.620 { 00:14:12.620 "auth": { 00:14:12.620 "dhgroup": "ffdhe3072", 00:14:12.620 "digest": "sha384", 00:14:12.620 "state": "completed" 00:14:12.620 }, 00:14:12.620 "cntlid": 67, 00:14:12.620 "listen_address": { 00:14:12.620 "adrfam": "IPv4", 00:14:12.620 "traddr": "10.0.0.2", 00:14:12.620 "trsvcid": "4420", 00:14:12.620 "trtype": "TCP" 00:14:12.620 }, 00:14:12.620 "peer_address": { 00:14:12.620 "adrfam": "IPv4", 00:14:12.620 "traddr": "10.0.0.1", 00:14:12.620 "trsvcid": "34294", 00:14:12.620 "trtype": "TCP" 00:14:12.620 }, 00:14:12.620 "qid": 0, 00:14:12.620 "state": "enabled", 00:14:12.620 "thread": "nvmf_tgt_poll_group_000" 00:14:12.620 } 00:14:12.620 ]' 00:14:12.620 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:12.878 
14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:12.878 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:12.878 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:12.878 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:12.878 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.878 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.878 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.136 14:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:14:13.714 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.972 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:13.972 14:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.972 14:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.972 14:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.972 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.972 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:13.972 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:14.230 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:14:14.230 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.230 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:14.230 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:14.230 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:14.230 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.230 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.230 14:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.230 14:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.230 14:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
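(Editor's note: the assertions interleaved through the trace — `jq -r '.[].name'` on bdev_nvme_get_controllers and `jq -r '.[0].auth.digest/.dhgroup/.state'` on nvmf_subsystem_get_qpairs, each followed by a `[[ ... == ... ]]` test — all follow one pattern. A minimal stand-alone sketch of that verification step, assuming the same sockets and subsystem NQN as above and with the expected values set per pass, could look like the following; it is a reconstruction for readability, not the test's own helper.)

```bash
#!/usr/bin/env bash
# Hedged sketch of the qpair verification step seen in the trace; expected_*
# values are whatever digest/dhgroup was configured for the current pass.
set -euo pipefail

TGT_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
HOST_RPC="$TGT_RPC -s /var/tmp/host.sock"
SUBNQN="nqn.2024-03.io.spdk:cnode0"

expected_digest=sha384
expected_dhgroup=ffdhe3072

# The controller created by bdev_nvme_attach_controller must be present on the host.
name=$($HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# On the target, the admin qpair (qid 0) reports how it was authenticated.
qpairs=$($TGT_RPC nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$expected_digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$expected_dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]
```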
00:14:14.230 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.230 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.488 00:14:14.488 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:14.488 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.488 14:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:14.745 14:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.745 14:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.745 14:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.745 14:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.745 14:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.745 14:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.745 { 00:14:14.745 "auth": { 00:14:14.745 "dhgroup": "ffdhe3072", 00:14:14.746 "digest": "sha384", 00:14:14.746 "state": "completed" 00:14:14.746 }, 00:14:14.746 "cntlid": 69, 00:14:14.746 "listen_address": { 00:14:14.746 "adrfam": "IPv4", 00:14:14.746 "traddr": "10.0.0.2", 00:14:14.746 "trsvcid": "4420", 00:14:14.746 "trtype": "TCP" 00:14:14.746 }, 00:14:14.746 "peer_address": { 00:14:14.746 "adrfam": "IPv4", 00:14:14.746 "traddr": "10.0.0.1", 00:14:14.746 "trsvcid": "34324", 00:14:14.746 "trtype": "TCP" 00:14:14.746 }, 00:14:14.746 "qid": 0, 00:14:14.746 "state": "enabled", 00:14:14.746 "thread": "nvmf_tgt_poll_group_000" 00:14:14.746 } 00:14:14.746 ]' 00:14:14.746 14:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.746 14:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:14.746 14:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.746 14:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:14.746 14:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:14.746 14:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.746 14:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.746 14:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.311 14:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret 
DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:14:15.876 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.876 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:15.876 14:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.876 14:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.876 14:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.876 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.876 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:15.876 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:16.133 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:14:16.133 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:16.133 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:16.133 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:16.133 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:16.133 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.133 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:14:16.133 14:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.133 14:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.133 14:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.133 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:16.133 14:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:16.698 00:14:16.698 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.698 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.698 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.956 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.956 14:53:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.956 14:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.956 14:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.956 14:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.956 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.956 { 00:14:16.956 "auth": { 00:14:16.956 "dhgroup": "ffdhe3072", 00:14:16.956 "digest": "sha384", 00:14:16.956 "state": "completed" 00:14:16.956 }, 00:14:16.956 "cntlid": 71, 00:14:16.956 "listen_address": { 00:14:16.956 "adrfam": "IPv4", 00:14:16.956 "traddr": "10.0.0.2", 00:14:16.956 "trsvcid": "4420", 00:14:16.956 "trtype": "TCP" 00:14:16.956 }, 00:14:16.956 "peer_address": { 00:14:16.956 "adrfam": "IPv4", 00:14:16.956 "traddr": "10.0.0.1", 00:14:16.956 "trsvcid": "36922", 00:14:16.956 "trtype": "TCP" 00:14:16.956 }, 00:14:16.956 "qid": 0, 00:14:16.956 "state": "enabled", 00:14:16.956 "thread": "nvmf_tgt_poll_group_000" 00:14:16.956 } 00:14:16.956 ]' 00:14:16.956 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.956 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:16.956 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.956 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:16.956 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.956 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.956 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.956 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.534 14:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:14:18.119 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.119 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:18.119 14:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.119 14:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.119 14:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.119 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:18.119 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:18.119 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:18.119 14:53:56 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:18.377 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:14:18.377 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:18.377 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:18.377 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:18.377 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:18.377 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.377 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.377 14:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.377 14:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.377 14:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.377 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.377 14:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.943 00:14:18.943 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:18.943 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.943 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:19.201 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.202 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.202 14:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.202 14:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.202 14:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.202 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:19.202 { 00:14:19.202 "auth": { 00:14:19.202 "dhgroup": "ffdhe4096", 00:14:19.202 "digest": "sha384", 00:14:19.202 "state": "completed" 00:14:19.202 }, 00:14:19.202 "cntlid": 73, 00:14:19.202 "listen_address": { 00:14:19.202 "adrfam": "IPv4", 00:14:19.202 "traddr": "10.0.0.2", 00:14:19.202 "trsvcid": "4420", 00:14:19.202 "trtype": "TCP" 00:14:19.202 }, 00:14:19.202 "peer_address": { 00:14:19.202 "adrfam": "IPv4", 00:14:19.202 "traddr": "10.0.0.1", 00:14:19.202 "trsvcid": "36950", 00:14:19.202 "trtype": "TCP" 00:14:19.202 }, 00:14:19.202 "qid": 0, 00:14:19.202 "state": "enabled", 
00:14:19.202 "thread": "nvmf_tgt_poll_group_000" 00:14:19.202 } 00:14:19.202 ]' 00:14:19.202 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:19.202 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:19.202 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:19.202 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:19.202 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:19.202 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.202 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.202 14:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.460 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.395 14:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.960 00:14:20.960 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.960 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.960 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.218 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.218 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.218 14:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.218 14:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.218 14:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.218 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:21.218 { 00:14:21.218 "auth": { 00:14:21.218 "dhgroup": "ffdhe4096", 00:14:21.218 "digest": "sha384", 00:14:21.218 "state": "completed" 00:14:21.218 }, 00:14:21.218 "cntlid": 75, 00:14:21.218 "listen_address": { 00:14:21.218 "adrfam": "IPv4", 00:14:21.218 "traddr": "10.0.0.2", 00:14:21.218 "trsvcid": "4420", 00:14:21.218 "trtype": "TCP" 00:14:21.218 }, 00:14:21.218 "peer_address": { 00:14:21.218 "adrfam": "IPv4", 00:14:21.218 "traddr": "10.0.0.1", 00:14:21.218 "trsvcid": "36984", 00:14:21.218 "trtype": "TCP" 00:14:21.218 }, 00:14:21.218 "qid": 0, 00:14:21.218 "state": "enabled", 00:14:21.218 "thread": "nvmf_tgt_poll_group_000" 00:14:21.218 } 00:14:21.218 ]' 00:14:21.218 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.218 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:21.218 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:21.218 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:21.218 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:21.218 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.218 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.218 14:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.476 14:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:14:22.411 14:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.411 14:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:22.411 14:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.411 14:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.411 14:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.411 14:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:22.411 14:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:22.411 14:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:22.669 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:14:22.669 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.669 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:22.669 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:22.669 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:22.669 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.669 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.669 14:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.669 14:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.669 14:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.669 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.669 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.927 00:14:22.927 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.927 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.927 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:23.185 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.185 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.185 14:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.185 14:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.442 14:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.442 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:23.442 { 00:14:23.442 "auth": { 00:14:23.442 "dhgroup": "ffdhe4096", 00:14:23.442 "digest": "sha384", 00:14:23.442 "state": "completed" 00:14:23.442 }, 00:14:23.442 "cntlid": 77, 00:14:23.442 "listen_address": { 00:14:23.442 "adrfam": "IPv4", 00:14:23.442 "traddr": "10.0.0.2", 00:14:23.442 "trsvcid": "4420", 00:14:23.442 "trtype": "TCP" 00:14:23.442 }, 00:14:23.442 "peer_address": { 00:14:23.442 "adrfam": "IPv4", 00:14:23.442 "traddr": "10.0.0.1", 00:14:23.442 "trsvcid": "37000", 00:14:23.442 "trtype": "TCP" 00:14:23.442 }, 00:14:23.442 "qid": 0, 00:14:23.442 "state": "enabled", 00:14:23.442 "thread": "nvmf_tgt_poll_group_000" 00:14:23.442 } 00:14:23.442 ]' 00:14:23.442 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:23.442 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:23.442 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:23.442 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:23.442 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:23.442 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.442 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.442 14:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.698 14:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:14:24.629 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.629 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:24.630 14:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.630 14:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.630 14:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.630 14:54:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:24.630 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:24.630 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:24.887 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:14:24.887 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.887 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:24.887 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:24.887 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:24.887 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.887 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:14:24.887 14:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.887 14:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.887 14:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.887 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.887 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:25.144 00:14:25.144 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:25.144 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.144 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:25.402 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.402 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.402 14:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.402 14:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.402 14:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.402 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:25.402 { 00:14:25.402 "auth": { 00:14:25.402 "dhgroup": "ffdhe4096", 00:14:25.402 "digest": "sha384", 00:14:25.402 "state": "completed" 00:14:25.402 }, 00:14:25.402 "cntlid": 79, 00:14:25.402 "listen_address": { 00:14:25.402 "adrfam": "IPv4", 00:14:25.402 "traddr": "10.0.0.2", 00:14:25.402 "trsvcid": "4420", 00:14:25.402 "trtype": "TCP" 00:14:25.402 }, 00:14:25.402 
"peer_address": { 00:14:25.402 "adrfam": "IPv4", 00:14:25.402 "traddr": "10.0.0.1", 00:14:25.402 "trsvcid": "35962", 00:14:25.402 "trtype": "TCP" 00:14:25.402 }, 00:14:25.402 "qid": 0, 00:14:25.402 "state": "enabled", 00:14:25.402 "thread": "nvmf_tgt_poll_group_000" 00:14:25.402 } 00:14:25.402 ]' 00:14:25.402 14:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:25.402 14:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:25.402 14:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:25.659 14:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:25.659 14:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:25.659 14:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.659 14:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.659 14:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.917 14:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:14:26.482 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.482 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:26.482 14:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.482 14:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.482 14:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.482 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:26.482 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.482 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:26.482 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:26.740 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:14:26.740 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.740 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:26.740 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:26.740 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:26.740 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.740 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.740 14:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.740 14:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.740 14:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.740 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.740 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.306 00:14:27.306 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:27.306 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.306 14:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:27.564 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.564 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.564 14:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.564 14:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.564 14:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.564 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.564 { 00:14:27.564 "auth": { 00:14:27.564 "dhgroup": "ffdhe6144", 00:14:27.564 "digest": "sha384", 00:14:27.564 "state": "completed" 00:14:27.564 }, 00:14:27.564 "cntlid": 81, 00:14:27.564 "listen_address": { 00:14:27.564 "adrfam": "IPv4", 00:14:27.564 "traddr": "10.0.0.2", 00:14:27.564 "trsvcid": "4420", 00:14:27.564 "trtype": "TCP" 00:14:27.564 }, 00:14:27.564 "peer_address": { 00:14:27.564 "adrfam": "IPv4", 00:14:27.564 "traddr": "10.0.0.1", 00:14:27.564 "trsvcid": "35998", 00:14:27.564 "trtype": "TCP" 00:14:27.564 }, 00:14:27.564 "qid": 0, 00:14:27.564 "state": "enabled", 00:14:27.564 "thread": "nvmf_tgt_poll_group_000" 00:14:27.564 } 00:14:27.564 ]' 00:14:27.564 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:27.822 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:27.822 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:27.822 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:27.822 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.822 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.822 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.822 14:54:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.080 14:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:14:29.013 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.014 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:29.014 14:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.014 14:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.014 14:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.014 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.014 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:29.014 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:29.271 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:14:29.271 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.271 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:29.271 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:29.271 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:29.271 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.271 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.271 14:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.271 14:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.272 14:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.272 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.272 14:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.529 00:14:29.529 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:29.529 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:29.529 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.096 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.096 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.096 14:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.096 14:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.096 14:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.096 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.096 { 00:14:30.096 "auth": { 00:14:30.096 "dhgroup": "ffdhe6144", 00:14:30.096 "digest": "sha384", 00:14:30.096 "state": "completed" 00:14:30.096 }, 00:14:30.096 "cntlid": 83, 00:14:30.096 "listen_address": { 00:14:30.096 "adrfam": "IPv4", 00:14:30.096 "traddr": "10.0.0.2", 00:14:30.096 "trsvcid": "4420", 00:14:30.096 "trtype": "TCP" 00:14:30.096 }, 00:14:30.096 "peer_address": { 00:14:30.096 "adrfam": "IPv4", 00:14:30.096 "traddr": "10.0.0.1", 00:14:30.096 "trsvcid": "36020", 00:14:30.096 "trtype": "TCP" 00:14:30.096 }, 00:14:30.096 "qid": 0, 00:14:30.096 "state": "enabled", 00:14:30.096 "thread": "nvmf_tgt_poll_group_000" 00:14:30.096 } 00:14:30.096 ]' 00:14:30.096 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.096 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:30.096 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.096 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:30.096 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.096 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.096 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.096 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.354 14:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:14:30.921 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.921 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:30.921 14:54:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.921 14:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.179 14:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.832 00:14:31.832 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.832 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:31.833 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.098 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.098 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.098 14:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.098 14:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.098 14:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.098 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.098 { 00:14:32.098 "auth": { 
00:14:32.098 "dhgroup": "ffdhe6144", 00:14:32.098 "digest": "sha384", 00:14:32.098 "state": "completed" 00:14:32.098 }, 00:14:32.098 "cntlid": 85, 00:14:32.098 "listen_address": { 00:14:32.098 "adrfam": "IPv4", 00:14:32.098 "traddr": "10.0.0.2", 00:14:32.098 "trsvcid": "4420", 00:14:32.098 "trtype": "TCP" 00:14:32.098 }, 00:14:32.098 "peer_address": { 00:14:32.098 "adrfam": "IPv4", 00:14:32.098 "traddr": "10.0.0.1", 00:14:32.098 "trsvcid": "36032", 00:14:32.098 "trtype": "TCP" 00:14:32.098 }, 00:14:32.098 "qid": 0, 00:14:32.098 "state": "enabled", 00:14:32.098 "thread": "nvmf_tgt_poll_group_000" 00:14:32.098 } 00:14:32.098 ]' 00:14:32.098 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.098 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:32.098 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.099 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:32.099 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.099 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.099 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.099 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.357 14:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:14:33.295 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.295 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:33.295 14:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.295 14:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.295 14:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.295 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.295 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:33.295 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:33.556 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:14:33.556 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.556 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:33.556 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 
00:14:33.556 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:33.556 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.556 14:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:14:33.556 14:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.556 14:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.556 14:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.556 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:33.556 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:34.133 00:14:34.133 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.133 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.133 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.401 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.401 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.401 14:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.401 14:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.401 14:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.401 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.401 { 00:14:34.401 "auth": { 00:14:34.401 "dhgroup": "ffdhe6144", 00:14:34.401 "digest": "sha384", 00:14:34.401 "state": "completed" 00:14:34.401 }, 00:14:34.401 "cntlid": 87, 00:14:34.401 "listen_address": { 00:14:34.401 "adrfam": "IPv4", 00:14:34.401 "traddr": "10.0.0.2", 00:14:34.401 "trsvcid": "4420", 00:14:34.401 "trtype": "TCP" 00:14:34.401 }, 00:14:34.401 "peer_address": { 00:14:34.401 "adrfam": "IPv4", 00:14:34.401 "traddr": "10.0.0.1", 00:14:34.401 "trsvcid": "36080", 00:14:34.401 "trtype": "TCP" 00:14:34.401 }, 00:14:34.401 "qid": 0, 00:14:34.401 "state": "enabled", 00:14:34.401 "thread": "nvmf_tgt_poll_group_000" 00:14:34.401 } 00:14:34.401 ]' 00:14:34.401 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.401 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:34.401 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.401 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:34.401 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.401 14:54:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.401 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.401 14:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.667 14:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:14:35.615 14:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.615 14:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:35.615 14:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.615 14:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.615 14:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.615 14:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:35.615 14:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.615 14:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:35.615 14:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:35.874 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:14:35.874 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:35.874 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:35.874 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:35.874 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:35.874 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.874 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.874 14:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.874 14:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.874 14:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.874 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.874 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.440 00:14:36.440 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.440 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.440 14:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.700 14:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.700 14:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.700 14:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.700 14:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.700 14:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.700 14:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.700 { 00:14:36.700 "auth": { 00:14:36.700 "dhgroup": "ffdhe8192", 00:14:36.700 "digest": "sha384", 00:14:36.700 "state": "completed" 00:14:36.700 }, 00:14:36.700 "cntlid": 89, 00:14:36.700 "listen_address": { 00:14:36.700 "adrfam": "IPv4", 00:14:36.700 "traddr": "10.0.0.2", 00:14:36.700 "trsvcid": "4420", 00:14:36.700 "trtype": "TCP" 00:14:36.700 }, 00:14:36.700 "peer_address": { 00:14:36.700 "adrfam": "IPv4", 00:14:36.700 "traddr": "10.0.0.1", 00:14:36.700 "trsvcid": "39116", 00:14:36.700 "trtype": "TCP" 00:14:36.700 }, 00:14:36.700 "qid": 0, 00:14:36.700 "state": "enabled", 00:14:36.700 "thread": "nvmf_tgt_poll_group_000" 00:14:36.700 } 00:14:36.700 ]' 00:14:36.700 14:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.700 14:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:36.700 14:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.959 14:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:36.959 14:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.959 14:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.959 14:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.959 14:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.218 14:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.155 
14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.155 14:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.090 00:14:39.090 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.090 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.090 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.090 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.090 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.090 14:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.090 14:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:39.090 14:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.090 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.090 { 00:14:39.090 "auth": { 00:14:39.090 "dhgroup": "ffdhe8192", 00:14:39.090 "digest": "sha384", 00:14:39.090 "state": "completed" 00:14:39.090 }, 00:14:39.090 "cntlid": 91, 00:14:39.090 "listen_address": { 00:14:39.090 "adrfam": "IPv4", 00:14:39.090 "traddr": "10.0.0.2", 00:14:39.090 "trsvcid": "4420", 00:14:39.090 "trtype": "TCP" 00:14:39.090 }, 00:14:39.090 "peer_address": { 00:14:39.090 "adrfam": "IPv4", 00:14:39.090 "traddr": "10.0.0.1", 00:14:39.090 "trsvcid": "39148", 00:14:39.090 "trtype": "TCP" 00:14:39.090 }, 00:14:39.090 "qid": 0, 00:14:39.090 "state": "enabled", 00:14:39.090 "thread": "nvmf_tgt_poll_group_000" 00:14:39.090 } 00:14:39.090 ]' 00:14:39.090 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.090 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:39.090 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.349 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:39.349 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.349 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.349 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.349 14:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.607 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:14:40.543 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.543 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:40.543 14:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.543 14:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.543 14:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.543 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.543 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:40.543 14:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:40.543 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:14:40.543 14:54:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.543 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:40.543 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:40.543 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:40.543 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.543 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.543 14:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.543 14:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.543 14:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.543 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.543 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.479 00:14:41.479 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.479 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.479 14:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.479 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.479 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.479 14:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.479 14:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.479 14:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.479 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.479 { 00:14:41.479 "auth": { 00:14:41.479 "dhgroup": "ffdhe8192", 00:14:41.479 "digest": "sha384", 00:14:41.479 "state": "completed" 00:14:41.479 }, 00:14:41.479 "cntlid": 93, 00:14:41.479 "listen_address": { 00:14:41.479 "adrfam": "IPv4", 00:14:41.479 "traddr": "10.0.0.2", 00:14:41.479 "trsvcid": "4420", 00:14:41.479 "trtype": "TCP" 00:14:41.479 }, 00:14:41.479 "peer_address": { 00:14:41.479 "adrfam": "IPv4", 00:14:41.479 "traddr": "10.0.0.1", 00:14:41.479 "trsvcid": "39168", 00:14:41.479 "trtype": "TCP" 00:14:41.479 }, 00:14:41.479 "qid": 0, 00:14:41.479 "state": "enabled", 00:14:41.479 "thread": "nvmf_tgt_poll_group_000" 00:14:41.479 } 00:14:41.479 ]' 00:14:41.479 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.737 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:41.737 14:54:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.737 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:41.737 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.737 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.737 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.737 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.995 14:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:14:42.930 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.930 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:42.930 14:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.930 14:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.930 14:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.930 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.930 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:42.930 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:43.189 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:14:43.189 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.189 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:43.189 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:43.189 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:43.189 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.189 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:14:43.189 14:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.189 14:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.189 14:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.189 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:43.189 14:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:43.755 00:14:44.014 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.014 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.014 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.272 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.272 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.272 14:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.272 14:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.272 14:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.272 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.272 { 00:14:44.272 "auth": { 00:14:44.272 "dhgroup": "ffdhe8192", 00:14:44.272 "digest": "sha384", 00:14:44.272 "state": "completed" 00:14:44.272 }, 00:14:44.272 "cntlid": 95, 00:14:44.272 "listen_address": { 00:14:44.272 "adrfam": "IPv4", 00:14:44.272 "traddr": "10.0.0.2", 00:14:44.272 "trsvcid": "4420", 00:14:44.272 "trtype": "TCP" 00:14:44.272 }, 00:14:44.272 "peer_address": { 00:14:44.272 "adrfam": "IPv4", 00:14:44.272 "traddr": "10.0.0.1", 00:14:44.272 "trsvcid": "39196", 00:14:44.272 "trtype": "TCP" 00:14:44.272 }, 00:14:44.272 "qid": 0, 00:14:44.272 "state": "enabled", 00:14:44.272 "thread": "nvmf_tgt_poll_group_000" 00:14:44.272 } 00:14:44.272 ]' 00:14:44.272 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.272 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:44.272 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.272 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:44.272 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.272 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.272 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.272 14:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.529 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:14:45.462 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.462 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.462 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:45.462 14:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.462 14:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.462 14:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.462 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:45.462 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.462 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.462 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:45.462 14:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:45.719 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:14:45.719 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.719 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:45.719 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:45.719 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:45.719 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.719 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.719 14:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.719 14:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.719 14:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.719 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.719 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.285 00:14:46.285 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.285 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.285 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.543 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.543 14:54:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.543 14:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.543 14:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.543 14:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.543 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.543 { 00:14:46.543 "auth": { 00:14:46.543 "dhgroup": "null", 00:14:46.543 "digest": "sha512", 00:14:46.543 "state": "completed" 00:14:46.543 }, 00:14:46.543 "cntlid": 97, 00:14:46.543 "listen_address": { 00:14:46.543 "adrfam": "IPv4", 00:14:46.543 "traddr": "10.0.0.2", 00:14:46.543 "trsvcid": "4420", 00:14:46.543 "trtype": "TCP" 00:14:46.543 }, 00:14:46.543 "peer_address": { 00:14:46.543 "adrfam": "IPv4", 00:14:46.543 "traddr": "10.0.0.1", 00:14:46.543 "trsvcid": "41694", 00:14:46.543 "trtype": "TCP" 00:14:46.543 }, 00:14:46.543 "qid": 0, 00:14:46.543 "state": "enabled", 00:14:46.543 "thread": "nvmf_tgt_poll_group_000" 00:14:46.543 } 00:14:46.543 ]' 00:14:46.543 14:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.543 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.543 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.543 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:46.543 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.543 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.543 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.543 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.108 14:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:14:47.674 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.675 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:47.675 14:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.675 14:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.675 14:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.675 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.675 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:47.675 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:47.932 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:14:47.932 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.932 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:47.932 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:47.932 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:47.932 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.932 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.932 14:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.932 14:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.932 14:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.932 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.932 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.498 00:14:48.498 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.498 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.498 14:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.756 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.756 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.756 14:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.756 14:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.756 14:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.756 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.756 { 00:14:48.756 "auth": { 00:14:48.756 "dhgroup": "null", 00:14:48.756 "digest": "sha512", 00:14:48.756 "state": "completed" 00:14:48.756 }, 00:14:48.756 "cntlid": 99, 00:14:48.756 "listen_address": { 00:14:48.756 "adrfam": "IPv4", 00:14:48.756 "traddr": "10.0.0.2", 00:14:48.756 "trsvcid": "4420", 00:14:48.756 "trtype": "TCP" 00:14:48.756 }, 00:14:48.756 "peer_address": { 00:14:48.756 "adrfam": "IPv4", 00:14:48.756 "traddr": "10.0.0.1", 00:14:48.756 "trsvcid": "41722", 00:14:48.756 "trtype": "TCP" 00:14:48.756 }, 00:14:48.756 "qid": 0, 00:14:48.756 "state": "enabled", 00:14:48.756 "thread": "nvmf_tgt_poll_group_000" 
00:14:48.756 } 00:14:48.756 ]' 00:14:48.756 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.756 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.756 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.756 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:48.756 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.756 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.756 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.756 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.014 14:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:14:49.948 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.948 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:49.948 14:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.948 14:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.948 14:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.948 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.948 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:49.948 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:50.206 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:14:50.206 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.206 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:50.206 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:50.206 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:50.206 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.206 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.206 14:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.206 14:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:14:50.206 14:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.206 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.206 14:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.465 00:14:50.465 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.465 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.465 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.032 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.032 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.032 14:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.032 14:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.032 14:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.032 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.032 { 00:14:51.032 "auth": { 00:14:51.032 "dhgroup": "null", 00:14:51.032 "digest": "sha512", 00:14:51.032 "state": "completed" 00:14:51.032 }, 00:14:51.032 "cntlid": 101, 00:14:51.032 "listen_address": { 00:14:51.032 "adrfam": "IPv4", 00:14:51.032 "traddr": "10.0.0.2", 00:14:51.032 "trsvcid": "4420", 00:14:51.032 "trtype": "TCP" 00:14:51.032 }, 00:14:51.032 "peer_address": { 00:14:51.032 "adrfam": "IPv4", 00:14:51.032 "traddr": "10.0.0.1", 00:14:51.032 "trsvcid": "41756", 00:14:51.032 "trtype": "TCP" 00:14:51.032 }, 00:14:51.032 "qid": 0, 00:14:51.032 "state": "enabled", 00:14:51.032 "thread": "nvmf_tgt_poll_group_000" 00:14:51.032 } 00:14:51.032 ]' 00:14:51.032 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.032 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:51.032 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.032 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:51.032 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.032 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.032 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.032 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.290 14:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid 
de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:14:52.228 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.228 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:52.228 14:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.228 14:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.228 14:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.228 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.228 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:52.228 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:52.486 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:14:52.486 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.486 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:52.486 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:52.486 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:52.486 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.486 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:14:52.486 14:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.486 14:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.486 14:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.486 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.486 14:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.744 00:14:52.744 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.744 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.744 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.004 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:14:53.004 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.004 14:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.004 14:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.004 14:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.004 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.004 { 00:14:53.004 "auth": { 00:14:53.004 "dhgroup": "null", 00:14:53.004 "digest": "sha512", 00:14:53.004 "state": "completed" 00:14:53.004 }, 00:14:53.004 "cntlid": 103, 00:14:53.004 "listen_address": { 00:14:53.004 "adrfam": "IPv4", 00:14:53.004 "traddr": "10.0.0.2", 00:14:53.004 "trsvcid": "4420", 00:14:53.004 "trtype": "TCP" 00:14:53.004 }, 00:14:53.004 "peer_address": { 00:14:53.004 "adrfam": "IPv4", 00:14:53.004 "traddr": "10.0.0.1", 00:14:53.004 "trsvcid": "41784", 00:14:53.004 "trtype": "TCP" 00:14:53.004 }, 00:14:53.004 "qid": 0, 00:14:53.004 "state": "enabled", 00:14:53.004 "thread": "nvmf_tgt_poll_group_000" 00:14:53.004 } 00:14:53.004 ]' 00:14:53.004 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.004 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:53.004 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.004 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:53.004 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.263 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.263 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.263 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.521 14:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:14:54.088 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.088 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:54.088 14:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.088 14:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.088 14:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.088 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.088 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.088 14:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:54.088 14:54:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:54.655 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:14:54.656 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.656 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:54.656 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:54.656 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:54.656 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.656 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.656 14:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.656 14:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.656 14:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.656 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.656 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.914 00:14:54.914 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.914 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.914 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.172 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.172 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.172 14:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.172 14:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.172 14:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.172 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.172 { 00:14:55.172 "auth": { 00:14:55.172 "dhgroup": "ffdhe2048", 00:14:55.172 "digest": "sha512", 00:14:55.172 "state": "completed" 00:14:55.172 }, 00:14:55.172 "cntlid": 105, 00:14:55.172 "listen_address": { 00:14:55.172 "adrfam": "IPv4", 00:14:55.172 "traddr": "10.0.0.2", 00:14:55.172 "trsvcid": "4420", 00:14:55.172 "trtype": "TCP" 00:14:55.172 }, 00:14:55.172 "peer_address": { 00:14:55.172 "adrfam": "IPv4", 00:14:55.172 "traddr": "10.0.0.1", 00:14:55.172 "trsvcid": "37416", 00:14:55.172 "trtype": "TCP" 00:14:55.172 }, 00:14:55.172 "qid": 0, 
00:14:55.172 "state": "enabled", 00:14:55.172 "thread": "nvmf_tgt_poll_group_000" 00:14:55.172 } 00:14:55.172 ]' 00:14:55.172 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.431 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:55.431 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.431 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:55.431 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.431 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.431 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.431 14:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.689 14:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:14:56.625 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.625 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:56.625 14:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.625 14:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.625 14:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.625 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.625 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:56.625 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:56.883 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:14:56.883 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.883 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:56.883 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:56.883 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:56.883 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.883 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.883 14:54:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.883 14:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.883 14:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.883 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.883 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.141 00:14:57.141 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.141 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.141 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.399 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.399 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.399 14:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.399 14:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.399 14:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.399 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.399 { 00:14:57.399 "auth": { 00:14:57.399 "dhgroup": "ffdhe2048", 00:14:57.399 "digest": "sha512", 00:14:57.399 "state": "completed" 00:14:57.399 }, 00:14:57.399 "cntlid": 107, 00:14:57.399 "listen_address": { 00:14:57.399 "adrfam": "IPv4", 00:14:57.399 "traddr": "10.0.0.2", 00:14:57.399 "trsvcid": "4420", 00:14:57.399 "trtype": "TCP" 00:14:57.399 }, 00:14:57.399 "peer_address": { 00:14:57.399 "adrfam": "IPv4", 00:14:57.399 "traddr": "10.0.0.1", 00:14:57.399 "trsvcid": "37452", 00:14:57.399 "trtype": "TCP" 00:14:57.399 }, 00:14:57.399 "qid": 0, 00:14:57.399 "state": "enabled", 00:14:57.399 "thread": "nvmf_tgt_poll_group_000" 00:14:57.399 } 00:14:57.399 ]' 00:14:57.399 14:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:57.399 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:57.399 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:57.658 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:57.658 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:57.658 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.658 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.658 14:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.917 14:54:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:14:58.854 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.854 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:14:58.854 14:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.854 14:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.854 14:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.854 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.854 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:58.854 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:59.146 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:14:59.146 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.146 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:59.146 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:59.146 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:59.146 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.146 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.146 14:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.146 14:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.146 14:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.146 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.146 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.410 00:14:59.410 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.410 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.410 14:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.667 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.667 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.667 14:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.667 14:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.667 14:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.667 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.667 { 00:14:59.667 "auth": { 00:14:59.667 "dhgroup": "ffdhe2048", 00:14:59.667 "digest": "sha512", 00:14:59.667 "state": "completed" 00:14:59.667 }, 00:14:59.667 "cntlid": 109, 00:14:59.667 "listen_address": { 00:14:59.667 "adrfam": "IPv4", 00:14:59.667 "traddr": "10.0.0.2", 00:14:59.667 "trsvcid": "4420", 00:14:59.667 "trtype": "TCP" 00:14:59.667 }, 00:14:59.667 "peer_address": { 00:14:59.667 "adrfam": "IPv4", 00:14:59.667 "traddr": "10.0.0.1", 00:14:59.667 "trsvcid": "37494", 00:14:59.667 "trtype": "TCP" 00:14:59.667 }, 00:14:59.667 "qid": 0, 00:14:59.667 "state": "enabled", 00:14:59.667 "thread": "nvmf_tgt_poll_group_000" 00:14:59.667 } 00:14:59.667 ]' 00:14:59.667 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.667 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:59.667 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.926 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:59.926 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.926 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.926 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.926 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.184 14:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:15:01.117 14:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.117 14:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:01.117 14:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.117 14:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.117 14:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.117 14:54:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.117 14:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:01.117 14:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:01.117 14:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:15:01.117 14:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.117 14:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:01.117 14:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:01.118 14:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:01.118 14:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.118 14:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:15:01.118 14:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.118 14:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.118 14:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.118 14:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:01.118 14:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:01.682 00:15:01.683 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.683 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.683 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.683 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.683 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.683 14:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.683 14:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.683 14:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.683 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.683 { 00:15:01.683 "auth": { 00:15:01.683 "dhgroup": "ffdhe2048", 00:15:01.683 "digest": "sha512", 00:15:01.683 "state": "completed" 00:15:01.683 }, 00:15:01.683 "cntlid": 111, 00:15:01.683 "listen_address": { 00:15:01.683 "adrfam": "IPv4", 00:15:01.683 "traddr": "10.0.0.2", 00:15:01.683 "trsvcid": "4420", 00:15:01.683 "trtype": "TCP" 00:15:01.683 }, 00:15:01.683 
"peer_address": { 00:15:01.683 "adrfam": "IPv4", 00:15:01.683 "traddr": "10.0.0.1", 00:15:01.683 "trsvcid": "37528", 00:15:01.683 "trtype": "TCP" 00:15:01.683 }, 00:15:01.683 "qid": 0, 00:15:01.683 "state": "enabled", 00:15:01.683 "thread": "nvmf_tgt_poll_group_000" 00:15:01.683 } 00:15:01.683 ]' 00:15:01.683 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.940 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:01.940 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.940 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:01.940 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.940 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.940 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.940 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.198 14:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:15:03.133 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.133 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:03.133 14:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.133 14:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.133 14:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.133 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:03.133 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.133 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:03.133 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:03.391 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:15:03.391 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.391 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:03.391 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:03.391 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:03.391 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.391 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.391 14:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.391 14:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.391 14:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.391 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.391 14:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.648 00:15:03.648 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.648 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.648 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.906 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.906 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.906 14:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.906 14:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.906 14:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.906 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.906 { 00:15:03.906 "auth": { 00:15:03.906 "dhgroup": "ffdhe3072", 00:15:03.906 "digest": "sha512", 00:15:03.906 "state": "completed" 00:15:03.906 }, 00:15:03.906 "cntlid": 113, 00:15:03.906 "listen_address": { 00:15:03.906 "adrfam": "IPv4", 00:15:03.906 "traddr": "10.0.0.2", 00:15:03.906 "trsvcid": "4420", 00:15:03.906 "trtype": "TCP" 00:15:03.906 }, 00:15:03.906 "peer_address": { 00:15:03.906 "adrfam": "IPv4", 00:15:03.906 "traddr": "10.0.0.1", 00:15:03.906 "trsvcid": "37560", 00:15:03.906 "trtype": "TCP" 00:15:03.906 }, 00:15:03.906 "qid": 0, 00:15:03.906 "state": "enabled", 00:15:03.906 "thread": "nvmf_tgt_poll_group_000" 00:15:03.906 } 00:15:03.906 ]' 00:15:03.906 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.164 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:04.164 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.164 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:04.164 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.164 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.164 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.164 14:54:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.422 14:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:15:05.356 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.357 14:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.923 00:15:05.923 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.923 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.923 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.181 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.181 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.181 14:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.181 14:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.181 14:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.181 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.181 { 00:15:06.181 "auth": { 00:15:06.181 "dhgroup": "ffdhe3072", 00:15:06.181 "digest": "sha512", 00:15:06.181 "state": "completed" 00:15:06.181 }, 00:15:06.181 "cntlid": 115, 00:15:06.181 "listen_address": { 00:15:06.181 "adrfam": "IPv4", 00:15:06.181 "traddr": "10.0.0.2", 00:15:06.181 "trsvcid": "4420", 00:15:06.181 "trtype": "TCP" 00:15:06.181 }, 00:15:06.181 "peer_address": { 00:15:06.181 "adrfam": "IPv4", 00:15:06.181 "traddr": "10.0.0.1", 00:15:06.181 "trsvcid": "36270", 00:15:06.181 "trtype": "TCP" 00:15:06.181 }, 00:15:06.181 "qid": 0, 00:15:06.181 "state": "enabled", 00:15:06.181 "thread": "nvmf_tgt_poll_group_000" 00:15:06.181 } 00:15:06.181 ]' 00:15:06.181 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.181 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:06.181 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.181 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:06.181 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.181 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.181 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.181 14:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.440 14:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:15:07.400 14:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.401 14:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:07.401 14:54:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.401 14:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.401 14:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.401 14:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.401 14:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:07.401 14:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:07.659 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:15:07.659 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.659 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:07.659 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:07.659 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:07.659 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.659 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.659 14:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.659 14:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.659 14:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.659 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.659 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.918 00:15:07.918 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.918 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.918 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.176 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.176 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.176 14:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.176 14:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.176 14:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.176 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.176 { 00:15:08.176 "auth": { 
00:15:08.176 "dhgroup": "ffdhe3072", 00:15:08.176 "digest": "sha512", 00:15:08.176 "state": "completed" 00:15:08.176 }, 00:15:08.176 "cntlid": 117, 00:15:08.176 "listen_address": { 00:15:08.176 "adrfam": "IPv4", 00:15:08.176 "traddr": "10.0.0.2", 00:15:08.176 "trsvcid": "4420", 00:15:08.176 "trtype": "TCP" 00:15:08.176 }, 00:15:08.176 "peer_address": { 00:15:08.176 "adrfam": "IPv4", 00:15:08.176 "traddr": "10.0.0.1", 00:15:08.176 "trsvcid": "36302", 00:15:08.176 "trtype": "TCP" 00:15:08.176 }, 00:15:08.176 "qid": 0, 00:15:08.176 "state": "enabled", 00:15:08.176 "thread": "nvmf_tgt_poll_group_000" 00:15:08.176 } 00:15:08.176 ]' 00:15:08.176 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.434 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:08.434 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.434 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:08.434 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.434 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.434 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.434 14:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.693 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:15:09.626 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.626 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:09.626 14:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.626 14:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.626 14:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.626 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.626 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:09.626 14:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:09.626 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:15:09.626 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.626 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:09.626 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:15:09.626 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:09.626 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.626 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:15:09.626 14:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.626 14:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.885 14:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.885 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.885 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:10.143 00:15:10.143 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.143 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.143 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.401 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.401 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.401 14:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.401 14:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.401 14:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.401 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.401 { 00:15:10.401 "auth": { 00:15:10.401 "dhgroup": "ffdhe3072", 00:15:10.401 "digest": "sha512", 00:15:10.401 "state": "completed" 00:15:10.401 }, 00:15:10.401 "cntlid": 119, 00:15:10.401 "listen_address": { 00:15:10.401 "adrfam": "IPv4", 00:15:10.401 "traddr": "10.0.0.2", 00:15:10.401 "trsvcid": "4420", 00:15:10.401 "trtype": "TCP" 00:15:10.401 }, 00:15:10.401 "peer_address": { 00:15:10.401 "adrfam": "IPv4", 00:15:10.401 "traddr": "10.0.0.1", 00:15:10.401 "trsvcid": "36324", 00:15:10.401 "trtype": "TCP" 00:15:10.401 }, 00:15:10.401 "qid": 0, 00:15:10.401 "state": "enabled", 00:15:10.401 "thread": "nvmf_tgt_poll_group_000" 00:15:10.401 } 00:15:10.401 ]' 00:15:10.401 14:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.401 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:10.401 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.659 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:10.659 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.659 14:54:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.659 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.659 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.917 14:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.850 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.416 00:15:12.416 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.416 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.416 14:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.672 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.672 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.672 14:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.672 14:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.672 14:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.672 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.672 { 00:15:12.672 "auth": { 00:15:12.672 "dhgroup": "ffdhe4096", 00:15:12.672 "digest": "sha512", 00:15:12.672 "state": "completed" 00:15:12.672 }, 00:15:12.672 "cntlid": 121, 00:15:12.672 "listen_address": { 00:15:12.672 "adrfam": "IPv4", 00:15:12.672 "traddr": "10.0.0.2", 00:15:12.672 "trsvcid": "4420", 00:15:12.672 "trtype": "TCP" 00:15:12.672 }, 00:15:12.672 "peer_address": { 00:15:12.672 "adrfam": "IPv4", 00:15:12.672 "traddr": "10.0.0.1", 00:15:12.672 "trsvcid": "36342", 00:15:12.672 "trtype": "TCP" 00:15:12.672 }, 00:15:12.672 "qid": 0, 00:15:12.672 "state": "enabled", 00:15:12.672 "thread": "nvmf_tgt_poll_group_000" 00:15:12.672 } 00:15:12.672 ]' 00:15:12.672 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.672 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:12.672 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.672 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:12.672 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.930 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.930 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.930 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.187 14:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:15:13.753 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.753 
14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:13.753 14:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.753 14:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.753 14:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.753 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.753 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:13.753 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:14.316 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:15:14.316 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.316 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:14.316 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:14.316 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:14.316 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.316 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.316 14:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.316 14:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.316 14:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.316 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.316 14:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.573 00:15:14.573 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.573 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.573 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.831 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.831 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.831 14:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.831 14:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:15:14.831 14:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.831 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.831 { 00:15:14.831 "auth": { 00:15:14.831 "dhgroup": "ffdhe4096", 00:15:14.831 "digest": "sha512", 00:15:14.831 "state": "completed" 00:15:14.831 }, 00:15:14.831 "cntlid": 123, 00:15:14.831 "listen_address": { 00:15:14.831 "adrfam": "IPv4", 00:15:14.831 "traddr": "10.0.0.2", 00:15:14.831 "trsvcid": "4420", 00:15:14.831 "trtype": "TCP" 00:15:14.831 }, 00:15:14.831 "peer_address": { 00:15:14.831 "adrfam": "IPv4", 00:15:14.831 "traddr": "10.0.0.1", 00:15:14.831 "trsvcid": "36378", 00:15:14.831 "trtype": "TCP" 00:15:14.831 }, 00:15:14.831 "qid": 0, 00:15:14.831 "state": "enabled", 00:15:14.831 "thread": "nvmf_tgt_poll_group_000" 00:15:14.831 } 00:15:14.831 ]' 00:15:14.831 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.831 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:14.831 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.088 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:15.088 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.089 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.089 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.089 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.346 14:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:15:15.912 14:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.912 14:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:15.912 14:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.912 14:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.912 14:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.912 14:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.912 14:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:15.912 14:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:16.477 14:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:15:16.477 14:54:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.477 14:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:16.477 14:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:16.477 14:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:16.477 14:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.477 14:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.477 14:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.477 14:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.477 14:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.477 14:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.477 14:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.735 00:15:16.735 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.735 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.735 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.993 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.993 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.993 14:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.993 14:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.993 14:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.993 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.993 { 00:15:16.993 "auth": { 00:15:16.993 "dhgroup": "ffdhe4096", 00:15:16.993 "digest": "sha512", 00:15:16.993 "state": "completed" 00:15:16.993 }, 00:15:16.993 "cntlid": 125, 00:15:16.993 "listen_address": { 00:15:16.993 "adrfam": "IPv4", 00:15:16.993 "traddr": "10.0.0.2", 00:15:16.993 "trsvcid": "4420", 00:15:16.993 "trtype": "TCP" 00:15:16.993 }, 00:15:16.993 "peer_address": { 00:15:16.993 "adrfam": "IPv4", 00:15:16.993 "traddr": "10.0.0.1", 00:15:16.993 "trsvcid": "48904", 00:15:16.993 "trtype": "TCP" 00:15:16.993 }, 00:15:16.993 "qid": 0, 00:15:16.993 "state": "enabled", 00:15:16.993 "thread": "nvmf_tgt_poll_group_000" 00:15:16.993 } 00:15:16.993 ]' 00:15:16.993 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.251 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:17.251 14:54:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.251 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:17.251 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.251 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.251 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.251 14:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.508 14:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:15:18.442 14:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.442 14:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:18.442 14:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.442 14:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.442 14:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.442 14:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.442 14:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:18.442 14:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:18.442 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:15:18.442 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.442 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:18.442 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:18.442 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:18.442 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.442 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:15:18.442 14:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.442 14:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.442 14:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.442 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:18.442 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:19.009 00:15:19.009 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.009 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:19.009 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.266 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.266 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.266 14:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.266 14:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.266 14:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.266 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:19.266 { 00:15:19.266 "auth": { 00:15:19.266 "dhgroup": "ffdhe4096", 00:15:19.266 "digest": "sha512", 00:15:19.266 "state": "completed" 00:15:19.266 }, 00:15:19.266 "cntlid": 127, 00:15:19.266 "listen_address": { 00:15:19.266 "adrfam": "IPv4", 00:15:19.266 "traddr": "10.0.0.2", 00:15:19.266 "trsvcid": "4420", 00:15:19.267 "trtype": "TCP" 00:15:19.267 }, 00:15:19.267 "peer_address": { 00:15:19.267 "adrfam": "IPv4", 00:15:19.267 "traddr": "10.0.0.1", 00:15:19.267 "trsvcid": "48932", 00:15:19.267 "trtype": "TCP" 00:15:19.267 }, 00:15:19.267 "qid": 0, 00:15:19.267 "state": "enabled", 00:15:19.267 "thread": "nvmf_tgt_poll_group_000" 00:15:19.267 } 00:15:19.267 ]' 00:15:19.267 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:19.267 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:19.267 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:19.267 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:19.267 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.553 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.553 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.553 14:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.811 14:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:15:20.744 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.744 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.744 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:20.744 14:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.744 14:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.744 14:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.745 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:20.745 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.745 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:20.745 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:21.002 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:15:21.002 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.002 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:21.002 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:21.002 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:21.002 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.002 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.002 14:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.002 14:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.002 14:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.002 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.003 14:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.568 00:15:21.568 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.568 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.568 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.825 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.825 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
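The nvmf_subsystem_get_qpairs call just issued returns, as in every iteration above, a JSON array describing the accepted qpair, and the script then checks its auth block with three jq filters. A minimal standalone version of that verification, assuming the JSON is captured into a variable the way the trace captures it into qpairs (expected values shown for the sha512/ffdhe6144 case in progress here):

  # Confirm the negotiated authentication parameters on the target side.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512"    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe6144" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]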
00:15:21.825 14:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.825 14:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.082 14:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.082 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.082 { 00:15:22.082 "auth": { 00:15:22.082 "dhgroup": "ffdhe6144", 00:15:22.082 "digest": "sha512", 00:15:22.082 "state": "completed" 00:15:22.082 }, 00:15:22.082 "cntlid": 129, 00:15:22.082 "listen_address": { 00:15:22.082 "adrfam": "IPv4", 00:15:22.082 "traddr": "10.0.0.2", 00:15:22.082 "trsvcid": "4420", 00:15:22.082 "trtype": "TCP" 00:15:22.082 }, 00:15:22.082 "peer_address": { 00:15:22.082 "adrfam": "IPv4", 00:15:22.082 "traddr": "10.0.0.1", 00:15:22.082 "trsvcid": "48962", 00:15:22.082 "trtype": "TCP" 00:15:22.082 }, 00:15:22.082 "qid": 0, 00:15:22.082 "state": "enabled", 00:15:22.082 "thread": "nvmf_tgt_poll_group_000" 00:15:22.082 } 00:15:22.082 ]' 00:15:22.082 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.082 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:22.082 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.082 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:22.082 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.082 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.082 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.082 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.340 14:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:15:23.272 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.272 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:23.272 14:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.272 14:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.272 14:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.272 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.272 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:23.272 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:23.529 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:15:23.529 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.529 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:23.529 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:23.529 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:23.529 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.529 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.529 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.529 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.529 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.529 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.529 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.094 00:15:24.094 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.094 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.094 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.352 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.352 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.352 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.352 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.352 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.352 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.352 { 00:15:24.352 "auth": { 00:15:24.352 "dhgroup": "ffdhe6144", 00:15:24.352 "digest": "sha512", 00:15:24.352 "state": "completed" 00:15:24.352 }, 00:15:24.352 "cntlid": 131, 00:15:24.352 "listen_address": { 00:15:24.352 "adrfam": "IPv4", 00:15:24.352 "traddr": "10.0.0.2", 00:15:24.352 "trsvcid": "4420", 00:15:24.352 "trtype": "TCP" 00:15:24.352 }, 00:15:24.352 "peer_address": { 00:15:24.352 "adrfam": "IPv4", 00:15:24.352 "traddr": "10.0.0.1", 00:15:24.352 "trsvcid": "48978", 00:15:24.352 "trtype": "TCP" 00:15:24.352 }, 00:15:24.352 "qid": 0, 00:15:24.352 "state": "enabled", 00:15:24.352 "thread": "nvmf_tgt_poll_group_000" 00:15:24.352 } 00:15:24.352 ]' 00:15:24.352 
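The target/auth.sh@37 line in each iteration builds the optional controller-key argument with bash's ':+' alternate-value expansion: the --dhchap-ctrlr-key flag is emitted only when a controller secret is configured for that key index. That is why the key3 iterations above add the host and attach with --dhchap-key key3 alone, and the matching nvme connect lines carry no --dhchap-ctrl-secret: ckeys[3] is evidently empty in this run. A small illustration of the expansion (the array values are placeholders, not the real secrets):

  # ckeys[i] holds the controller secret for key i; empty means unidirectional auth only.
  ckeys=( "secret0" "secret1" "secret2" "" )

  for keyid in "${!ckeys[@]}"; do
      ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      echo "key$keyid -> ${ckey[*]:-<no controller key>}"
  done
  # key0 -> --dhchap-ctrlr-key ckey0
  # key1 -> --dhchap-ctrlr-key ckey1
  # key2 -> --dhchap-ctrlr-key ckey2
  # key3 -> <no controller key>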
14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.608 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:24.608 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.608 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:24.608 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.608 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.608 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.608 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.866 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:15:25.846 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.846 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:25.846 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.846 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.846 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.846 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.846 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:25.846 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:25.846 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:15:25.846 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.846 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:25.846 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:25.846 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:25.846 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.846 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.847 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.847 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
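Besides the SPDK-initiator attach, each iteration also exercises the kernel initiator: the host reconnects with nvme-cli, passing the same DHHC-1 secrets on the command line, then disconnects, and the host entry is removed from the subsystem before the next key is tested. The sketch below groups those commands as they appear in this trace; the secret blobs are elided here (they are visible in the connect lines above), and --dhchap-ctrl-secret is simply dropped when no controller key is configured, as in the key3 passes.

  # Kernel-initiator leg of one iteration, commands as shown in the trace.
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Authenticate with the host secret; the controller secret enables bidirectional auth.
  nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
      --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c \
      --dhchap-secret "DHHC-1:01:..." --dhchap-ctrl-secret "DHHC-1:02:..."

  # Tear down and de-authorize the host on the target before the next iteration.
  nvme disconnect -n $SUBNQN
  rpc_cmd nvmf_subsystem_remove_host $SUBNQN $HOSTNQN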
00:15:25.847 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.847 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.847 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.412 00:15:26.412 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.412 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.412 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.670 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.670 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.670 14:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.670 14:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.671 14:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.671 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.671 { 00:15:26.671 "auth": { 00:15:26.671 "dhgroup": "ffdhe6144", 00:15:26.671 "digest": "sha512", 00:15:26.671 "state": "completed" 00:15:26.671 }, 00:15:26.671 "cntlid": 133, 00:15:26.671 "listen_address": { 00:15:26.671 "adrfam": "IPv4", 00:15:26.671 "traddr": "10.0.0.2", 00:15:26.671 "trsvcid": "4420", 00:15:26.671 "trtype": "TCP" 00:15:26.671 }, 00:15:26.671 "peer_address": { 00:15:26.671 "adrfam": "IPv4", 00:15:26.671 "traddr": "10.0.0.1", 00:15:26.671 "trsvcid": "46244", 00:15:26.671 "trtype": "TCP" 00:15:26.671 }, 00:15:26.671 "qid": 0, 00:15:26.671 "state": "enabled", 00:15:26.671 "thread": "nvmf_tgt_poll_group_000" 00:15:26.671 } 00:15:26.671 ]' 00:15:26.671 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.928 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:26.928 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.928 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:26.928 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.928 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.928 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.928 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.494 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid 
de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:15:28.060 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.060 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:28.060 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.060 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.060 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.060 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:28.060 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:28.060 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:28.319 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:15:28.319 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.319 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:28.319 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:28.319 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:28.319 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.319 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:15:28.319 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.319 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.319 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.319 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:28.319 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:28.884 00:15:28.884 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.884 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.884 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.141 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.141 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.141 14:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.141 14:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.141 14:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.141 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.141 { 00:15:29.141 "auth": { 00:15:29.141 "dhgroup": "ffdhe6144", 00:15:29.141 "digest": "sha512", 00:15:29.141 "state": "completed" 00:15:29.141 }, 00:15:29.141 "cntlid": 135, 00:15:29.141 "listen_address": { 00:15:29.141 "adrfam": "IPv4", 00:15:29.141 "traddr": "10.0.0.2", 00:15:29.141 "trsvcid": "4420", 00:15:29.141 "trtype": "TCP" 00:15:29.141 }, 00:15:29.141 "peer_address": { 00:15:29.141 "adrfam": "IPv4", 00:15:29.141 "traddr": "10.0.0.1", 00:15:29.141 "trsvcid": "46274", 00:15:29.141 "trtype": "TCP" 00:15:29.141 }, 00:15:29.141 "qid": 0, 00:15:29.141 "state": "enabled", 00:15:29.141 "thread": "nvmf_tgt_poll_group_000" 00:15:29.141 } 00:15:29.141 ]' 00:15:29.141 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.141 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:29.141 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.399 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:29.399 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.399 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.399 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.399 14:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.658 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:15:30.223 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.223 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:30.223 14:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.223 14:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.223 14:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.223 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:30.223 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.223 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:15:30.223 14:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:30.480 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:15:30.480 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.480 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:30.480 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:30.480 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:30.480 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.480 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.480 14:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.480 14:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.480 14:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.480 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.480 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.466 00:15:31.466 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:31.466 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.466 14:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.466 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.466 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.466 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.466 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.466 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.466 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.466 { 00:15:31.466 "auth": { 00:15:31.466 "dhgroup": "ffdhe8192", 00:15:31.466 "digest": "sha512", 00:15:31.466 "state": "completed" 00:15:31.466 }, 00:15:31.466 "cntlid": 137, 00:15:31.466 "listen_address": { 00:15:31.466 "adrfam": "IPv4", 00:15:31.466 "traddr": "10.0.0.2", 00:15:31.466 "trsvcid": "4420", 00:15:31.466 "trtype": "TCP" 00:15:31.466 }, 00:15:31.466 "peer_address": { 00:15:31.466 "adrfam": "IPv4", 00:15:31.466 "traddr": "10.0.0.1", 00:15:31.466 "trsvcid": "46300", 00:15:31.466 "trtype": "TCP" 00:15:31.466 }, 
00:15:31.466 "qid": 0, 00:15:31.466 "state": "enabled", 00:15:31.466 "thread": "nvmf_tgt_poll_group_000" 00:15:31.466 } 00:15:31.466 ]' 00:15:31.466 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.466 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:31.466 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.466 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:31.466 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.724 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.724 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.724 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.981 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:15:32.545 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.545 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:32.545 14:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.545 14:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.545 14:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.545 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.545 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:32.545 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:32.802 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:15:32.802 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.802 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:32.802 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:32.802 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:32.802 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.803 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:15:32.803 14:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.803 14:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.803 14:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.803 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.803 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.735 00:15:33.735 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.735 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.735 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.993 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.993 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.993 14:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.993 14:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.993 14:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.993 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.993 { 00:15:33.993 "auth": { 00:15:33.993 "dhgroup": "ffdhe8192", 00:15:33.993 "digest": "sha512", 00:15:33.993 "state": "completed" 00:15:33.993 }, 00:15:33.993 "cntlid": 139, 00:15:33.993 "listen_address": { 00:15:33.993 "adrfam": "IPv4", 00:15:33.993 "traddr": "10.0.0.2", 00:15:33.993 "trsvcid": "4420", 00:15:33.993 "trtype": "TCP" 00:15:33.993 }, 00:15:33.993 "peer_address": { 00:15:33.993 "adrfam": "IPv4", 00:15:33.993 "traddr": "10.0.0.1", 00:15:33.993 "trsvcid": "46318", 00:15:33.993 "trtype": "TCP" 00:15:33.993 }, 00:15:33.993 "qid": 0, 00:15:33.993 "state": "enabled", 00:15:33.993 "thread": "nvmf_tgt_poll_group_000" 00:15:33.993 } 00:15:33.993 ]' 00:15:33.993 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.993 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:33.993 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.993 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:33.993 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.993 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.993 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.993 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.251 14:55:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:01:ZWRlNDA4ZWVjZDU1Y2E0MWJhZmIyNjY5NTRkMWU1NTEtqJGN: --dhchap-ctrl-secret DHHC-1:02:ZDNmOTM1NGMwZWUxNTliNGRhN2EyZWUyMDAxMTMwZDUwNGRmODVkMWYyYjYyMDhlAZ8fIg==: 00:15:35.184 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.185 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:35.185 14:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.185 14:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.185 14:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.185 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.185 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:35.185 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:35.443 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:15:35.443 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.443 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:35.443 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:35.443 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:35.443 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.443 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.443 14:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.443 14:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.443 14:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.444 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.444 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.374 00:15:36.374 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.374 14:55:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.374 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.374 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.374 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.374 14:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.374 14:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.631 14:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.631 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.631 { 00:15:36.631 "auth": { 00:15:36.631 "dhgroup": "ffdhe8192", 00:15:36.631 "digest": "sha512", 00:15:36.631 "state": "completed" 00:15:36.631 }, 00:15:36.631 "cntlid": 141, 00:15:36.631 "listen_address": { 00:15:36.631 "adrfam": "IPv4", 00:15:36.631 "traddr": "10.0.0.2", 00:15:36.631 "trsvcid": "4420", 00:15:36.631 "trtype": "TCP" 00:15:36.631 }, 00:15:36.631 "peer_address": { 00:15:36.631 "adrfam": "IPv4", 00:15:36.631 "traddr": "10.0.0.1", 00:15:36.631 "trsvcid": "36210", 00:15:36.631 "trtype": "TCP" 00:15:36.631 }, 00:15:36.631 "qid": 0, 00:15:36.631 "state": "enabled", 00:15:36.631 "thread": "nvmf_tgt_poll_group_000" 00:15:36.631 } 00:15:36.631 ]' 00:15:36.631 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.631 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:36.631 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.631 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:36.631 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.631 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.631 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.631 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.888 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:02:NmRiNmIzMGU3Yzk5ZjlkMWM1YTcwODQ4OWJmMzk4MjI2YjQyNWExYzRiZDg2YzMwopoTbw==: --dhchap-ctrl-secret DHHC-1:01:NWIzODNmODFhMDMwMTkzYTNmZTk4MWE4ZWM0ZDBmZDbcaBt0: 00:15:37.821 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.821 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:37.821 14:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.821 14:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.821 14:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.821 
14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.821 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:37.821 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:38.080 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:15:38.080 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.080 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:38.080 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:38.080 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:38.080 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.080 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:15:38.080 14:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.080 14:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.080 14:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.080 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:38.080 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:38.647 00:15:38.647 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.647 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.647 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.214 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.214 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.214 14:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.214 14:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.214 14:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.214 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.214 { 00:15:39.214 "auth": { 00:15:39.214 "dhgroup": "ffdhe8192", 00:15:39.214 "digest": "sha512", 00:15:39.214 "state": "completed" 00:15:39.214 }, 00:15:39.214 "cntlid": 143, 00:15:39.214 "listen_address": { 00:15:39.214 "adrfam": "IPv4", 00:15:39.214 "traddr": "10.0.0.2", 00:15:39.214 "trsvcid": "4420", 00:15:39.214 "trtype": "TCP" 00:15:39.214 }, 00:15:39.214 
"peer_address": { 00:15:39.214 "adrfam": "IPv4", 00:15:39.214 "traddr": "10.0.0.1", 00:15:39.214 "trsvcid": "36238", 00:15:39.214 "trtype": "TCP" 00:15:39.214 }, 00:15:39.214 "qid": 0, 00:15:39.214 "state": "enabled", 00:15:39.214 "thread": "nvmf_tgt_poll_group_000" 00:15:39.214 } 00:15:39.214 ]' 00:15:39.214 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.214 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.214 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.214 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:39.214 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.214 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.214 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.214 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.780 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:15:40.715 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.715 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:40.715 14:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.715 14:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.715 14:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.715 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:40.715 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:15:40.715 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:40.715 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:40.716 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:40.716 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:40.973 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:15:40.973 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.973 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:40.973 14:55:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:40.973 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:40.973 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.973 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.973 14:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.974 14:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.974 14:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.974 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.974 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.905 00:15:41.905 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.905 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.905 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.162 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.162 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.162 14:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.162 14:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.162 14:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.162 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.162 { 00:15:42.163 "auth": { 00:15:42.163 "dhgroup": "ffdhe8192", 00:15:42.163 "digest": "sha512", 00:15:42.163 "state": "completed" 00:15:42.163 }, 00:15:42.163 "cntlid": 145, 00:15:42.163 "listen_address": { 00:15:42.163 "adrfam": "IPv4", 00:15:42.163 "traddr": "10.0.0.2", 00:15:42.163 "trsvcid": "4420", 00:15:42.163 "trtype": "TCP" 00:15:42.163 }, 00:15:42.163 "peer_address": { 00:15:42.163 "adrfam": "IPv4", 00:15:42.163 "traddr": "10.0.0.1", 00:15:42.163 "trsvcid": "36260", 00:15:42.163 "trtype": "TCP" 00:15:42.163 }, 00:15:42.163 "qid": 0, 00:15:42.163 "state": "enabled", 00:15:42.163 "thread": "nvmf_tgt_poll_group_000" 00:15:42.163 } 00:15:42.163 ]' 00:15:42.163 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.163 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:42.163 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.421 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.421 14:55:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.421 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.421 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.421 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.678 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret DHHC-1:00:YjdhNGExOGE4NzAzNWE3MzIzNTViNTdmN2Y0MWY4OTc1Yjk4ZjgzNjMwZTZiZWVl2VS1eQ==: --dhchap-ctrl-secret DHHC-1:03:OTQyODQzMDVkM2FmMmJkZDkwMGZiOGJmNTAwNDNiMGJiYzIzNmZhMjJhMWQxM2MzMWRiYzdiZWM0ZGZmNjkzMg0aE8U=: 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 00:15:43.244 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:44.179 2024/07/12 14:55:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:44.179 request: 00:15:44.179 { 00:15:44.180 "method": "bdev_nvme_attach_controller", 00:15:44.180 "params": { 00:15:44.180 "name": "nvme0", 00:15:44.180 "trtype": "tcp", 00:15:44.180 "traddr": "10.0.0.2", 00:15:44.180 "adrfam": "ipv4", 00:15:44.180 "trsvcid": "4420", 00:15:44.180 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:44.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c", 00:15:44.180 "prchk_reftag": false, 00:15:44.180 "prchk_guard": false, 00:15:44.180 "hdgst": false, 00:15:44.180 "ddgst": false, 00:15:44.180 "dhchap_key": "key2" 00:15:44.180 } 00:15:44.180 } 00:15:44.180 Got JSON-RPC error response 00:15:44.180 GoRPCClient: error on JSON-RPC call 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:44.180 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:44.744 2024/07/12 14:55:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:44.744 request: 00:15:44.744 { 00:15:44.744 "method": "bdev_nvme_attach_controller", 00:15:44.744 "params": { 00:15:44.744 "name": "nvme0", 00:15:44.744 "trtype": "tcp", 00:15:44.744 "traddr": "10.0.0.2", 00:15:44.744 "adrfam": "ipv4", 00:15:44.744 "trsvcid": "4420", 00:15:44.744 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:44.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c", 00:15:44.744 "prchk_reftag": false, 00:15:44.744 "prchk_guard": false, 00:15:44.744 "hdgst": false, 00:15:44.744 "ddgst": false, 00:15:44.744 "dhchap_key": "key1", 00:15:44.744 "dhchap_ctrlr_key": "ckey2" 00:15:44.744 } 00:15:44.744 } 00:15:44.745 Got JSON-RPC error response 00:15:44.745 GoRPCClient: error on JSON-RPC call 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key1 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.745 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.310 2024/07/12 14:55:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:45.310 request: 00:15:45.310 { 00:15:45.310 "method": "bdev_nvme_attach_controller", 00:15:45.310 "params": { 00:15:45.310 "name": "nvme0", 00:15:45.310 "trtype": "tcp", 00:15:45.310 "traddr": "10.0.0.2", 00:15:45.310 "adrfam": "ipv4", 00:15:45.310 "trsvcid": "4420", 00:15:45.310 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:45.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c", 00:15:45.311 "prchk_reftag": false, 00:15:45.311 "prchk_guard": false, 00:15:45.311 "hdgst": false, 00:15:45.311 "ddgst": false, 00:15:45.311 "dhchap_key": "key1", 00:15:45.311 "dhchap_ctrlr_key": "ckey1" 00:15:45.311 } 00:15:45.311 } 00:15:45.311 Got JSON-RPC error response 00:15:45.311 GoRPCClient: error on JSON-RPC call 00:15:45.311 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 
-- # es=1 00:15:45.311 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:45.311 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:45.311 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:45.311 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:45.311 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.311 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.311 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.311 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 77848 00:15:45.311 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 77848 ']' 00:15:45.311 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77848 00:15:45.311 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:45.311 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:45.311 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77848 00:15:45.568 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:45.568 killing process with pid 77848 00:15:45.568 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:45.568 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77848' 00:15:45.568 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77848 00:15:45.568 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77848 00:15:45.568 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:45.568 14:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:45.568 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:45.568 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.568 14:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82914 00:15:45.568 14:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:45.568 14:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82914 00:15:45.568 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82914 ']' 00:15:45.568 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.568 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.568 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
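The hand-off traced above (kill the first nvmf_tgt, pid 77848, then bring up a second one with DH-HMAC-CHAP debug logging and wait for its RPC socket) reduces to the minimal sketch below. The binary path, the -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth flags and the nvmf_tgt_ns_spdk namespace are taken from the trace; the 60-second polling loop and the rpc_get_methods probe are illustrative assumptions standing in for the test's waitforlisten helper.

    # stop the previous target and wait for it to exit (pid 77848 in this run)
    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
    # start a fresh target inside the test netns, holding back subsystem init (--wait-for-rpc)
    # and enabling nvmf_auth debug logs
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # poll the RPC socket until the new process answers (assumed 60 s budget)
    for _ in $(seq 60); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 1
    done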
00:15:45.568 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.568 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.942 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:46.942 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:46.942 14:55:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:46.942 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:46.942 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.942 14:55:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.942 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:46.942 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82914 00:15:46.942 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82914 ']' 00:15:46.942 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.942 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.942 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.942 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.942 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
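Each connect_authenticate pass in this trace follows the same shape: register the host NQN on the subsystem with a DH-CHAP key, constrain the host initiator to the digest/dhgroup under test, attach a controller with that key, and confirm the qpair reports a completed handshake. A condensed sketch using only the RPCs and flags visible in the trace (rpc_cmd is the test's wrapper around rpc.py for the target-side /var/tmp/spdk.sock, and the host-side option call mirrors the one at target/auth.sh@94 earlier in the run):

    # target side: allow this host NQN to authenticate with key3
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3
    # host side: restrict the initiator to sha512 + ffdhe8192, then attach with the same key
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
    # verify: the qpair's auth block should report the negotiated digest, dhgroup and state
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect: completed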
00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:47.200 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:47.766 00:15:47.766 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.766 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.766 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.333 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.333 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.333 14:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.333 14:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.333 14:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.333 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.333 { 00:15:48.333 "auth": { 00:15:48.333 "dhgroup": "ffdhe8192", 00:15:48.333 "digest": "sha512", 00:15:48.333 "state": "completed" 00:15:48.333 }, 00:15:48.333 "cntlid": 1, 00:15:48.333 "listen_address": { 00:15:48.333 "adrfam": "IPv4", 00:15:48.333 "traddr": "10.0.0.2", 00:15:48.333 "trsvcid": "4420", 00:15:48.333 "trtype": "TCP" 00:15:48.333 }, 00:15:48.333 "peer_address": { 00:15:48.333 "adrfam": "IPv4", 00:15:48.333 "traddr": "10.0.0.1", 00:15:48.333 "trsvcid": "37318", 00:15:48.333 "trtype": "TCP" 00:15:48.333 }, 00:15:48.333 "qid": 0, 00:15:48.333 "state": "enabled", 00:15:48.333 "thread": "nvmf_tgt_poll_group_000" 00:15:48.333 } 00:15:48.333 ]' 00:15:48.333 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.333 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:48.334 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.334 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:48.334 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.334 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.334 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.334 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.592 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-secret 
DHHC-1:03:ZjNkNmM0M2ZlZmY4ZTcxYzI0MDhjMzg5Zjc1Y2M0Mzg4OWNkYTdjZTQ1MGYyMGMyNzkwZTViNDcyYTMzZDg2YzKX+Bs=: 00:15:49.527 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.527 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:49.527 14:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.527 14:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.527 14:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.527 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --dhchap-key key3 00:15:49.527 14:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.527 14:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.527 14:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.527 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:49.527 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:49.786 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.786 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:49.786 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.786 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:49.786 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.786 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:49.786 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.786 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.786 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.044 2024/07/12 14:55:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:50.044 request: 00:15:50.044 { 00:15:50.044 "method": "bdev_nvme_attach_controller", 00:15:50.044 "params": { 00:15:50.044 "name": "nvme0", 00:15:50.044 "trtype": "tcp", 00:15:50.044 "traddr": "10.0.0.2", 00:15:50.044 "adrfam": "ipv4", 00:15:50.044 "trsvcid": "4420", 00:15:50.044 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:50.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c", 00:15:50.044 "prchk_reftag": false, 00:15:50.044 "prchk_guard": false, 00:15:50.044 "hdgst": false, 00:15:50.044 "ddgst": false, 00:15:50.044 "dhchap_key": "key3" 00:15:50.045 } 00:15:50.045 } 00:15:50.045 Got JSON-RPC error response 00:15:50.045 GoRPCClient: error on JSON-RPC call 00:15:50.045 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:50.045 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:50.045 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:50.045 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:50.045 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:15:50.045 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:15:50.045 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:50.045 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:50.304 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.304 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:50.304 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.304 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:50.304 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:50.304 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:50.304 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:50.304 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.304 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.563 2024/07/12 14:55:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:50.563 request: 00:15:50.563 { 00:15:50.563 "method": "bdev_nvme_attach_controller", 00:15:50.563 "params": { 00:15:50.563 "name": "nvme0", 00:15:50.563 "trtype": "tcp", 00:15:50.563 "traddr": "10.0.0.2", 00:15:50.563 "adrfam": "ipv4", 00:15:50.563 "trsvcid": "4420", 00:15:50.563 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:50.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c", 00:15:50.563 "prchk_reftag": false, 00:15:50.563 "prchk_guard": false, 00:15:50.563 "hdgst": false, 00:15:50.563 "ddgst": false, 00:15:50.563 "dhchap_key": "key3" 00:15:50.563 } 00:15:50.563 } 00:15:50.563 Got JSON-RPC error response 00:15:50.563 GoRPCClient: error on JSON-RPC call 00:15:50.563 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:50.563 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:50.563 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:50.563 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:50.563 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:15:50.563 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:15:50.563 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:15:50.563 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:50.563 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:50.563 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:51.130 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:51.388 request: 00:15:51.389 { 00:15:51.389 "method": "bdev_nvme_attach_controller", 00:15:51.389 "params": { 00:15:51.389 "name": "nvme0", 00:15:51.389 "trtype": "tcp", 00:15:51.389 "traddr": "10.0.0.2", 00:15:51.389 "adrfam": "ipv4", 00:15:51.389 "trsvcid": "4420", 00:15:51.389 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:51.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c", 00:15:51.389 "prchk_reftag": false, 00:15:51.389 "prchk_guard": false, 00:15:51.389 "hdgst": false, 00:15:51.389 "ddgst": false, 00:15:51.389 "dhchap_key": "key0", 00:15:51.389 "dhchap_ctrlr_key": "key1" 00:15:51.389 } 00:15:51.389 } 00:15:51.389 Got JSON-RPC error response 00:15:51.389 GoRPCClient: error on JSON-RPC call 00:15:51.389 2024/07/12 14:55:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:51.389 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:51.389 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:51.389 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:51.389 14:55:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:51.389 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:51.389 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:51.647 00:15:51.647 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:15:51.647 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:15:51.647 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.905 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.905 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.905 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.472 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:15:52.472 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:15:52.472 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77893 00:15:52.472 14:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 77893 ']' 00:15:52.472 14:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77893 00:15:52.472 14:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:52.472 14:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.472 14:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77893 00:15:52.472 14:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:52.472 14:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:52.472 killing process with pid 77893 00:15:52.472 14:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77893' 00:15:52.472 14:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77893 00:15:52.472 14:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77893 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:52.731 rmmod nvme_tcp 00:15:52.731 rmmod nvme_fabrics 00:15:52.731 rmmod nvme_keyring 
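[Note] Condensed sketch of the DH-HMAC-CHAP RPC sequence the nvmf_auth_target run above exercises, using only calls that appear in this log (nvmf_subsystem_add_host on the target, bdev_nvme_set_options and bdev_nvme_attach_controller on the host-side app listening on /var/tmp/host.sock). The key labels (key0/key1/key3) refer to keys loaded earlier in the test; nothing here is additional setup beyond what the log shows.

  # Target side: allow the host NQN and bind a DH-CHAP key to it
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c \
      --dhchap-key key3

  # Host side: advertise the digests/DH groups the host may negotiate
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

  # Host side: attach with unidirectional (key0) and bidirectional (key1) auth;
  # a key mismatch fails the connect with Input/output error, as seen above
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1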
00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82914 ']' 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82914 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 82914 ']' 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 82914 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82914 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:52.731 killing process with pid 82914 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82914' 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 82914 00:15:52.731 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 82914 00:15:52.990 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:52.990 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:52.990 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:52.990 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:52.990 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:52.990 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.990 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.990 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.990 14:55:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:52.990 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.gXX /tmp/spdk.key-sha256.UR1 /tmp/spdk.key-sha384.uw5 /tmp/spdk.key-sha512.oOG /tmp/spdk.key-sha512.rXY /tmp/spdk.key-sha384.eCO /tmp/spdk.key-sha256.UGi '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:52.990 00:15:52.990 real 3m9.402s 00:15:52.990 user 7m40.955s 00:15:52.990 sys 0m22.509s 00:15:52.990 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:52.990 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.990 ************************************ 00:15:52.990 END TEST nvmf_auth_target 00:15:52.990 ************************************ 00:15:52.990 14:55:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:52.990 14:55:31 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:15:52.990 14:55:31 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:52.990 14:55:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:52.990 14:55:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:52.990 14:55:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:52.990 ************************************ 00:15:52.990 START TEST nvmf_bdevio_no_huge 00:15:52.990 ************************************ 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:52.990 * Looking for test storage... 00:15:52.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@14 -- # nvmftestinit 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:52.990 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:52.991 Cannot find device "nvmf_tgt_br" 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:15:52.991 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.249 Cannot find device "nvmf_tgt_br2" 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:53.249 14:55:31 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:53.249 Cannot find device "nvmf_tgt_br" 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:53.249 Cannot find device "nvmf_tgt_br2" 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:53.249 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:53.249 14:55:31 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:53.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:15:53.508 00:15:53.508 --- 10.0.0.2 ping statistics --- 00:15:53.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.508 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:53.508 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:53.508 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:15:53.508 00:15:53.508 --- 10.0.0.3 ping statistics --- 00:15:53.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.508 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:53.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:53.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:15:53.508 00:15:53.508 --- 10.0.0.1 ping statistics --- 00:15:53.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.508 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=83322 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:53.508 14:55:31 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 83322 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 83322 ']' 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.508 14:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:53.508 [2024-07-12 14:55:32.039038] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:15:53.508 [2024-07-12 14:55:32.039161] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:53.766 [2024-07-12 14:55:32.186343] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.766 [2024-07-12 14:55:32.308832] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.766 [2024-07-12 14:55:32.308901] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.766 [2024-07-12 14:55:32.308913] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.766 [2024-07-12 14:55:32.308922] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.766 [2024-07-12 14:55:32.308929] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
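[Note] For orientation, the nvmf_veth_init calls above build the following virtual test network; this is a condensed restatement of the ip/iptables commands already printed in this log (same interface, namespace, and address names), not extra configuration performed by the run.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target, 10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target, 10.0.0.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                               # bridge the peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                            # reachability check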
00:15:53.766 [2024-07-12 14:55:32.309030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:53.766 [2024-07-12 14:55:32.309125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:53.766 [2024-07-12 14:55:32.310300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:53.766 [2024-07-12 14:55:32.310312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:54.702 [2024-07-12 14:55:33.091725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:54.702 Malloc0 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:54.702 [2024-07-12 14:55:33.130040] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:54.702 { 00:15:54.702 "params": { 00:15:54.702 "name": "Nvme$subsystem", 00:15:54.702 "trtype": "$TEST_TRANSPORT", 00:15:54.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:54.702 "adrfam": "ipv4", 00:15:54.702 "trsvcid": "$NVMF_PORT", 00:15:54.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:54.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:54.702 "hdgst": ${hdgst:-false}, 00:15:54.702 "ddgst": ${ddgst:-false} 00:15:54.702 }, 00:15:54.702 "method": "bdev_nvme_attach_controller" 00:15:54.702 } 00:15:54.702 EOF 00:15:54.702 )") 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:15:54.702 14:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:54.702 "params": { 00:15:54.702 "name": "Nvme1", 00:15:54.702 "trtype": "tcp", 00:15:54.702 "traddr": "10.0.0.2", 00:15:54.702 "adrfam": "ipv4", 00:15:54.702 "trsvcid": "4420", 00:15:54.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.702 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:54.702 "hdgst": false, 00:15:54.702 "ddgst": false 00:15:54.702 }, 00:15:54.702 "method": "bdev_nvme_attach_controller" 00:15:54.702 }' 00:15:54.702 [2024-07-12 14:55:33.192197] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:15:54.702 [2024-07-12 14:55:33.192307] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83376 ] 00:15:54.702 [2024-07-12 14:55:33.336685] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:54.960 [2024-07-12 14:55:33.474468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.960 [2024-07-12 14:55:33.474598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.960 [2024-07-12 14:55:33.474603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.219 I/O targets: 00:15:55.219 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:55.219 00:15:55.219 00:15:55.219 CUnit - A unit testing framework for C - Version 2.1-3 00:15:55.219 http://cunit.sourceforge.net/ 00:15:55.219 00:15:55.219 00:15:55.219 Suite: bdevio tests on: Nvme1n1 00:15:55.219 Test: blockdev write read block ...passed 00:15:55.219 Test: blockdev write zeroes read block ...passed 00:15:55.219 Test: blockdev write zeroes read no split ...passed 00:15:55.219 Test: blockdev write zeroes read split ...passed 00:15:55.219 Test: blockdev write zeroes read split partial ...passed 00:15:55.219 Test: blockdev reset ...[2024-07-12 14:55:33.806078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:55.219 [2024-07-12 14:55:33.806219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdf600 (9): Bad file descriptor 00:15:55.219 [2024-07-12 14:55:33.822814] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:55.219 passed 00:15:55.219 Test: blockdev write read 8 blocks ...passed 00:15:55.219 Test: blockdev write read size > 128k ...passed 00:15:55.219 Test: blockdev write read invalid size ...passed 00:15:55.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.219 Test: blockdev write read max offset ...passed 00:15:55.477 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.477 Test: blockdev writev readv 8 blocks ...passed 00:15:55.477 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.477 Test: blockdev writev readv block ...passed 00:15:55.477 Test: blockdev writev readv size > 128k ...passed 00:15:55.477 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.477 Test: blockdev comparev and writev ...[2024-07-12 14:55:33.998958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:55.477 [2024-07-12 14:55:33.999020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:55.477 [2024-07-12 14:55:33.999042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:55.477 [2024-07-12 14:55:33.999054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:55.477 [2024-07-12 14:55:33.999602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:55.477 [2024-07-12 14:55:33.999630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:55.477 [2024-07-12 14:55:33.999649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:55.477 [2024-07-12 14:55:33.999660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:55.477 [2024-07-12 14:55:34.000125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:55.477 [2024-07-12 14:55:34.000162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:55.477 [2024-07-12 14:55:34.000182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:55.477 [2024-07-12 14:55:34.000193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:55.477 [2024-07-12 14:55:34.000772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:55.477 [2024-07-12 14:55:34.000801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:55.477 [2024-07-12 14:55:34.000820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:55.477 [2024-07-12 14:55:34.000831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:55.477 passed 00:15:55.477 Test: blockdev nvme passthru rw ...passed 00:15:55.477 Test: blockdev nvme passthru vendor specific ...[2024-07-12 14:55:34.083989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:55.477 [2024-07-12 14:55:34.084041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:55.477 [2024-07-12 14:55:34.084184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:55.477 [2024-07-12 14:55:34.084211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:55.477 [2024-07-12 14:55:34.084331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:55.477 [2024-07-12 14:55:34.084348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:55.477 [2024-07-12 14:55:34.084460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:55.477 [2024-07-12 14:55:34.084487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:55.477 passed 00:15:55.477 Test: blockdev nvme admin passthru ...passed 00:15:55.735 Test: blockdev copy ...passed 00:15:55.735 00:15:55.735 Run Summary: Type Total Ran Passed Failed Inactive 00:15:55.735 suites 1 1 n/a 0 0 00:15:55.735 tests 23 23 23 0 0 00:15:55.735 asserts 152 152 152 0 n/a 00:15:55.735 00:15:55.735 Elapsed time = 0.925 seconds 00:15:55.994 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:55.994 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.994 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:55.994 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.994 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:55.994 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:55.994 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:55.994 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:15:55.994 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:55.994 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:15:55.994 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:55.994 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:55.994 rmmod nvme_tcp 00:15:55.994 rmmod nvme_fabrics 00:15:55.994 rmmod nvme_keyring 00:15:55.994 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:56.251 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:15:56.251 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:15:56.251 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 83322 ']' 00:15:56.251 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 83322 00:15:56.251 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 83322 ']' 00:15:56.251 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 83322 00:15:56.251 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:15:56.251 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:56.251 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83322 00:15:56.251 killing process with pid 83322 00:15:56.251 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:56.251 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:56.251 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83322' 00:15:56.251 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 83322 00:15:56.251 14:55:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 83322 00:15:56.509 14:55:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:56.509 14:55:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:56.509 14:55:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:56.509 14:55:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.509 14:55:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.509 14:55:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.509 14:55:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.509 14:55:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.509 14:55:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:56.509 00:15:56.509 real 0m3.581s 00:15:56.509 user 0m12.953s 00:15:56.509 sys 0m1.320s 00:15:56.509 14:55:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:56.509 14:55:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:56.509 ************************************ 00:15:56.509 END TEST nvmf_bdevio_no_huge 00:15:56.509 ************************************ 00:15:56.509 14:55:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:56.509 14:55:35 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:56.509 14:55:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:56.509 14:55:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.509 14:55:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:56.509 ************************************ 00:15:56.509 START TEST nvmf_tls 00:15:56.509 ************************************ 00:15:56.509 14:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:56.787 * Looking for test storage... 
00:15:56.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:56.787 Cannot find device "nvmf_tgt_br" 00:15:56.787 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.788 Cannot find device "nvmf_tgt_br2" 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:56.788 Cannot find device "nvmf_tgt_br" 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:56.788 Cannot find device "nvmf_tgt_br2" 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:56.788 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:57.048 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:57.048 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:57.048 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:57.048 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:57.048 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:57.048 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:57.048 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:57.048 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:57.048 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:57.048 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:57.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:15:57.049 00:15:57.049 --- 10.0.0.2 ping statistics --- 00:15:57.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.049 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:57.049 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:57.049 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:15:57.049 00:15:57.049 --- 10.0.0.3 ping statistics --- 00:15:57.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.049 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:57.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:57.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:57.049 00:15:57.049 --- 10.0.0.1 ping statistics --- 00:15:57.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.049 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83567 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83567 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83567 ']' 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.049 14:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:57.049 [2024-07-12 14:55:35.638187] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:15:57.049 [2024-07-12 14:55:35.638652] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.307 [2024-07-12 14:55:35.783598] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.307 [2024-07-12 14:55:35.872443] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.307 [2024-07-12 14:55:35.872533] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:57.307 [2024-07-12 14:55:35.872552] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.307 [2024-07-12 14:55:35.872565] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.307 [2024-07-12 14:55:35.872576] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.307 [2024-07-12 14:55:35.872618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.241 14:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.241 14:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:58.241 14:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:58.241 14:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:58.241 14:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:58.241 14:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.241 14:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:15:58.241 14:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:58.500 true 00:15:58.500 14:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:58.500 14:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:15:58.768 14:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:15:58.768 14:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:15:58.768 14:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:59.026 14:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:59.026 14:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:15:59.284 14:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:15:59.284 14:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:15:59.284 14:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:59.544 14:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:59.544 14:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:15:59.801 14:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:15:59.801 14:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:15:59.801 14:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:15:59.801 14:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:00.368 14:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:16:00.368 14:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:00.368 14:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:00.368 14:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:00.368 14:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
00:16:00.935 14:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:16:00.935 14:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:00.935 14:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:01.194 14:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:01.194 14:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:01.452 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:16:01.452 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:01.452 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:01.452 14:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:01.452 14:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:01.452 14:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:01.452 14:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:16:01.452 14:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:01.452 14:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.tPtBLjTxoJ 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.SpkQznurBD 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.tPtBLjTxoJ 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.SpkQznurBD 00:16:01.710 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:01.968 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:02.226 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.tPtBLjTxoJ 
00:16:02.226 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tPtBLjTxoJ 00:16:02.226 14:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:02.484 [2024-07-12 14:55:41.090925] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.484 14:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:02.742 14:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:03.001 [2024-07-12 14:55:41.591019] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:03.001 [2024-07-12 14:55:41.591257] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.001 14:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:03.567 malloc0 00:16:03.567 14:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:03.567 14:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tPtBLjTxoJ 00:16:03.826 [2024-07-12 14:55:42.438252] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:03.826 14:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.tPtBLjTxoJ 00:16:16.042 Initializing NVMe Controllers 00:16:16.042 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:16.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:16.042 Initialization complete. Launching workers. 
00:16:16.042 ======================================================== 00:16:16.042 Latency(us) 00:16:16.042 Device Information : IOPS MiB/s Average min max 00:16:16.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8723.00 34.07 7340.20 2046.28 13182.65 00:16:16.042 ======================================================== 00:16:16.042 Total : 8723.00 34.07 7340.20 2046.28 13182.65 00:16:16.042 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tPtBLjTxoJ 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tPtBLjTxoJ' 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83940 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83940 /var/tmp/bdevperf.sock 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83940 ']' 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:16.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:16.042 14:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:16.042 [2024-07-12 14:55:52.722055] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:16:16.042 [2024-07-12 14:55:52.722191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83940 ] 00:16:16.042 [2024-07-12 14:55:52.861966] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.042 [2024-07-12 14:55:52.927974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.042 14:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:16.042 14:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:16.042 14:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tPtBLjTxoJ 00:16:16.042 [2024-07-12 14:55:53.943325] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:16.042 [2024-07-12 14:55:53.943872] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:16.042 TLSTESTn1 00:16:16.042 14:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:16.042 Running I/O for 10 seconds... 00:16:26.002 00:16:26.002 Latency(us) 00:16:26.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.002 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:26.002 Verification LBA range: start 0x0 length 0x2000 00:16:26.002 TLSTESTn1 : 10.02 3380.94 13.21 0.00 0.00 37789.16 7328.12 36461.85 00:16:26.002 =================================================================================================================== 00:16:26.003 Total : 3380.94 13.21 0.00 0.00 37789.16 7328.12 36461.85 00:16:26.003 0 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83940 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83940 ']' 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83940 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83940 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:26.003 killing process with pid 83940 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83940' 00:16:26.003 Received shutdown signal, test time was about 10.000000 seconds 00:16:26.003 00:16:26.003 Latency(us) 00:16:26.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.003 =================================================================================================================== 00:16:26.003 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83940 00:16:26.003 [2024-07-12 14:56:04.260620] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83940 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SpkQznurBD 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SpkQznurBD 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SpkQznurBD 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SpkQznurBD' 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84081 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84081 /var/tmp/bdevperf.sock 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84081 ']' 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.003 14:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:26.003 [2024-07-12 14:56:04.479319] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:16:26.003 [2024-07-12 14:56:04.479411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84081 ] 00:16:26.003 [2024-07-12 14:56:04.609224] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.260 [2024-07-12 14:56:04.681914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.195 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.195 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:27.195 14:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SpkQznurBD 00:16:27.195 [2024-07-12 14:56:05.771142] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:27.195 [2024-07-12 14:56:05.771683] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:27.195 [2024-07-12 14:56:05.782343] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:27.195 [2024-07-12 14:56:05.782821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc82e50 (107): Transport endpoint is not connected 00:16:27.195 [2024-07-12 14:56:05.783799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc82e50 (9): Bad file descriptor 00:16:27.195 [2024-07-12 14:56:05.784796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:27.195 [2024-07-12 14:56:05.784930] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:27.195 [2024-07-12 14:56:05.785026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:27.195 2024/07/12 14:56:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.SpkQznurBD subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:27.195 request: 00:16:27.195 { 00:16:27.195 "method": "bdev_nvme_attach_controller", 00:16:27.195 "params": { 00:16:27.195 "name": "TLSTEST", 00:16:27.195 "trtype": "tcp", 00:16:27.195 "traddr": "10.0.0.2", 00:16:27.195 "adrfam": "ipv4", 00:16:27.195 "trsvcid": "4420", 00:16:27.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:27.195 "prchk_reftag": false, 00:16:27.195 "prchk_guard": false, 00:16:27.195 "hdgst": false, 00:16:27.195 "ddgst": false, 00:16:27.195 "psk": "/tmp/tmp.SpkQznurBD" 00:16:27.195 } 00:16:27.195 } 00:16:27.195 Got JSON-RPC error response 00:16:27.195 GoRPCClient: error on JSON-RPC call 00:16:27.195 14:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84081 00:16:27.195 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84081 ']' 00:16:27.195 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84081 00:16:27.195 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:27.195 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:27.195 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84081 00:16:27.195 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:27.195 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:27.195 killing process with pid 84081 00:16:27.195 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84081' 00:16:27.195 Received shutdown signal, test time was about 10.000000 seconds 00:16:27.195 00:16:27.195 Latency(us) 00:16:27.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.195 =================================================================================================================== 00:16:27.195 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:27.195 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84081 00:16:27.195 [2024-07-12 14:56:05.829352] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:27.195 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84081 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tPtBLjTxoJ 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tPtBLjTxoJ 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tPtBLjTxoJ 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tPtBLjTxoJ' 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84131 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84131 /var/tmp/bdevperf.sock 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84131 ']' 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.454 14:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.454 [2024-07-12 14:56:06.047892] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:16:27.454 [2024-07-12 14:56:06.047989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84131 ] 00:16:27.712 [2024-07-12 14:56:06.186461] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.713 [2024-07-12 14:56:06.252365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.647 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.647 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:28.647 14:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.tPtBLjTxoJ 00:16:28.905 [2024-07-12 14:56:07.354152] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:28.905 [2024-07-12 14:56:07.354756] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:28.905 [2024-07-12 14:56:07.362438] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:28.905 [2024-07-12 14:56:07.362494] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:28.905 [2024-07-12 14:56:07.362581] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:28.905 [2024-07-12 14:56:07.363042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1377e50 (107): Transport endpoint is not connected 00:16:28.905 [2024-07-12 14:56:07.364012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1377e50 (9): Bad file descriptor 00:16:28.905 [2024-07-12 14:56:07.365007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:28.905 [2024-07-12 14:56:07.365114] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:28.905 [2024-07-12 14:56:07.365200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
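Here the key is the right one but the host NQN is not: tcp_sock_get_key builds the PSK identity from the host and subsystem NQNs ("NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" in the error above), and only host1 was registered against cnode1 at tls.sh@58, so the lookup for host2 finds nothing and the attach fails as intended. For a positive-path variant, host2 would first have to be registered the same way host1 was; the call below is hypothetical for this run and simply mirrors that earlier RPC:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Not executed in this test: registering host2 with the same PSK would let the identity lookup succeed.
"$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.tPtBLjTxoJ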
00:16:28.905 2024/07/12 14:56:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.tPtBLjTxoJ subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:28.905 request: 00:16:28.905 { 00:16:28.905 "method": "bdev_nvme_attach_controller", 00:16:28.905 "params": { 00:16:28.905 "name": "TLSTEST", 00:16:28.905 "trtype": "tcp", 00:16:28.905 "traddr": "10.0.0.2", 00:16:28.905 "adrfam": "ipv4", 00:16:28.905 "trsvcid": "4420", 00:16:28.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.905 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:28.905 "prchk_reftag": false, 00:16:28.905 "prchk_guard": false, 00:16:28.905 "hdgst": false, 00:16:28.905 "ddgst": false, 00:16:28.905 "psk": "/tmp/tmp.tPtBLjTxoJ" 00:16:28.905 } 00:16:28.905 } 00:16:28.905 Got JSON-RPC error response 00:16:28.905 GoRPCClient: error on JSON-RPC call 00:16:28.905 14:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84131 00:16:28.905 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84131 ']' 00:16:28.905 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84131 00:16:28.905 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:28.905 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:28.905 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84131 00:16:28.905 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:28.905 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:28.905 killing process with pid 84131 00:16:28.905 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84131' 00:16:28.905 Received shutdown signal, test time was about 10.000000 seconds 00:16:28.905 00:16:28.905 Latency(us) 00:16:28.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.905 =================================================================================================================== 00:16:28.905 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:28.905 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84131 00:16:28.905 [2024-07-12 14:56:07.418211] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:28.905 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84131 00:16:29.163 14:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tPtBLjTxoJ 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tPtBLjTxoJ 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tPtBLjTxoJ 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tPtBLjTxoJ' 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84172 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84172 /var/tmp/bdevperf.sock 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84172 ']' 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.164 14:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:29.164 [2024-07-12 14:56:07.638445] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:16:29.164 [2024-07-12 14:56:07.638578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84172 ] 00:16:29.164 [2024-07-12 14:56:07.771072] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.422 [2024-07-12 14:56:07.829921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.355 14:56:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.355 14:56:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:30.355 14:56:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tPtBLjTxoJ 00:16:30.355 [2024-07-12 14:56:08.968361] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:30.355 [2024-07-12 14:56:08.968473] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:30.355 [2024-07-12 14:56:08.973573] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:30.355 [2024-07-12 14:56:08.973614] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:30.355 [2024-07-12 14:56:08.973669] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:30.355 [2024-07-12 14:56:08.974263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ede50 (107): Transport endpoint is not connected 00:16:30.355 [2024-07-12 14:56:08.975251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ede50 (9): Bad file descriptor 00:16:30.355 [2024-07-12 14:56:08.976246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:30.355 [2024-07-12 14:56:08.976269] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:30.355 [2024-07-12 14:56:08.976283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
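The third negative case points the initiator at nqn.2016-06.io.spdk:cnode2, which setup_nvmf_tgt never created, so the same PSK identity lookup fails on the subsystem side ("NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2"). A sketch of what a cnode2 setup would involve, mirroring the tls.sh@49-58 sequence traced earlier for cnode1; none of these calls are made in this run, and the serial number is invented for illustration:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Hypothetical: create and expose cnode2 over TLS the same way cnode1 was set up above.
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 -k
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 malloc0 -n 1
"$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tPtBLjTxoJ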
00:16:30.355 2024/07/12 14:56:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.tPtBLjTxoJ subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:30.355 request: 00:16:30.355 { 00:16:30.355 "method": "bdev_nvme_attach_controller", 00:16:30.355 "params": { 00:16:30.355 "name": "TLSTEST", 00:16:30.355 "trtype": "tcp", 00:16:30.355 "traddr": "10.0.0.2", 00:16:30.355 "adrfam": "ipv4", 00:16:30.355 "trsvcid": "4420", 00:16:30.355 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:30.355 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:30.355 "prchk_reftag": false, 00:16:30.355 "prchk_guard": false, 00:16:30.355 "hdgst": false, 00:16:30.355 "ddgst": false, 00:16:30.355 "psk": "/tmp/tmp.tPtBLjTxoJ" 00:16:30.355 } 00:16:30.355 } 00:16:30.355 Got JSON-RPC error response 00:16:30.355 GoRPCClient: error on JSON-RPC call 00:16:30.355 14:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84172 00:16:30.355 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84172 ']' 00:16:30.355 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84172 00:16:30.355 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:30.355 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:30.355 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84172 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:30.614 killing process with pid 84172 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84172' 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84172 00:16:30.614 Received shutdown signal, test time was about 10.000000 seconds 00:16:30.614 00:16:30.614 Latency(us) 00:16:30.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.614 =================================================================================================================== 00:16:30.614 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:30.614 [2024-07-12 14:56:09.023237] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84172 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84218 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84218 /var/tmp/bdevperf.sock 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84218 ']' 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.614 14:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.614 [2024-07-12 14:56:09.257470] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:16:30.614 [2024-07-12 14:56:09.257638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84218 ] 00:16:30.873 [2024-07-12 14:56:09.402130] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.873 [2024-07-12 14:56:09.461244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.809 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.809 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:31.809 14:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:31.809 [2024-07-12 14:56:10.443129] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:31.809 [2024-07-12 14:56:10.445206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ac3e0 (9): Bad file descriptor 00:16:31.809 [2024-07-12 14:56:10.446198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:31.809 [2024-07-12 14:56:10.446220] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:31.809 [2024-07-12 14:56:10.446235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:31.809 2024/07/12 14:56:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:31.809 request: 00:16:31.809 { 00:16:31.809 "method": "bdev_nvme_attach_controller", 00:16:31.809 "params": { 00:16:31.809 "name": "TLSTEST", 00:16:31.809 "trtype": "tcp", 00:16:31.809 "traddr": "10.0.0.2", 00:16:31.809 "adrfam": "ipv4", 00:16:31.809 "trsvcid": "4420", 00:16:31.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:31.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:31.809 "prchk_reftag": false, 00:16:31.809 "prchk_guard": false, 00:16:31.809 "hdgst": false, 00:16:31.809 "ddgst": false 00:16:31.809 } 00:16:31.809 } 00:16:31.809 Got JSON-RPC error response 00:16:31.809 GoRPCClient: error on JSON-RPC call 00:16:32.067 14:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84218 00:16:32.067 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84218 ']' 00:16:32.067 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84218 00:16:32.067 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:32.067 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:32.067 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84218 00:16:32.067 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:32.067 killing process with pid 84218 00:16:32.067 14:56:10 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:32.067 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84218' 00:16:32.067 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84218 00:16:32.067 Received shutdown signal, test time was about 10.000000 seconds 00:16:32.067 00:16:32.068 Latency(us) 00:16:32.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.068 =================================================================================================================== 00:16:32.068 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84218 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 83567 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83567 ']' 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83567 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83567 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:32.068 killing process with pid 83567 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83567' 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83567 00:16:32.068 [2024-07-12 14:56:10.672351] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:32.068 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83567 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # 
key_long_path=/tmp/tmp.ZbN0wLiCj5 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ZbN0wLiCj5 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84279 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84279 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84279 ']' 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:32.327 14:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.327 [2024-07-12 14:56:10.968900] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:16:32.327 [2024-07-12 14:56:10.969009] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.585 [2024-07-12 14:56:11.104191] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.585 [2024-07-12 14:56:11.163027] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.585 [2024-07-12 14:56:11.163082] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.585 [2024-07-12 14:56:11.163093] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.585 [2024-07-12 14:56:11.163101] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.585 [2024-07-12 14:56:11.163109] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
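The NVMeTLSkey-1 string generated above is the TLS PSK interchange format: the configured key bytes with a CRC-32 appended, base64-encoded, and wrapped in a prefix whose hash field ("02" here) selects SHA-384. Below is a minimal sketch of that encoding, mirroring the python heredoc that format_key runs; the little-endian CRC placement is an assumption of this sketch, and the key is the throwaway test value from this run, not a real secret.

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'PYEOF'
# Sketch: encode a configured PSK into the NVMeTLSkey-1 interchange form.
# Assumption: the CRC-32 is appended little-endian before base64 encoding.
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:02:{}:".format(base64.b64encode(key + crc).decode()))
PYEOF

If those assumptions hold, the output matches the key_long value captured above before it is written to the mktemp file and locked down with chmod 0600.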
00:16:32.585 [2024-07-12 14:56:11.163139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.519 14:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.519 14:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:33.519 14:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:33.519 14:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:33.519 14:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.519 14:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.519 14:56:11 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ZbN0wLiCj5 00:16:33.519 14:56:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZbN0wLiCj5 00:16:33.519 14:56:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:33.519 [2024-07-12 14:56:12.170994] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.778 14:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:34.036 14:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:34.036 [2024-07-12 14:56:12.659068] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:34.036 [2024-07-12 14:56:12.659291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.036 14:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:34.294 malloc0 00:16:34.294 14:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZbN0wLiCj5 00:16:34.861 [2024-07-12 14:56:13.458092] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:34.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
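Stripped of the xtrace noise, the setup_nvmf_tgt sequence above reduces to a handful of RPCs against the freshly started nvmf_tgt. A condensed sketch of that sequence follows, with the rpc.py path, address, and key path taken verbatim from this run (they are specific to the test VM):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o                           # TCP transport init
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                              # -k marks the listener TLS-secured
$RPC bdev_malloc_create 32 4096 -b malloc0                     # 32 MiB namespace backing bdev
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.ZbN0wLiCj5                                  # per-host PSK via the deprecated path form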
00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZbN0wLiCj5 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZbN0wLiCj5' 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84376 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84376 /var/tmp/bdevperf.sock 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84376 ']' 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.861 14:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:35.129 [2024-07-12 14:56:13.526012] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:16:35.129 [2024-07-12 14:56:13.526129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84376 ] 00:16:35.129 [2024-07-12 14:56:13.657363] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.129 [2024-07-12 14:56:13.717789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.397 14:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:35.397 14:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:35.397 14:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZbN0wLiCj5 00:16:35.656 [2024-07-12 14:56:14.070491] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:35.656 [2024-07-12 14:56:14.070648] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:35.656 TLSTESTn1 00:16:35.656 14:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:35.656 Running I/O for 10 seconds... 
00:16:47.851 00:16:47.851 Latency(us) 00:16:47.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.851 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:47.851 Verification LBA range: start 0x0 length 0x2000 00:16:47.851 TLSTESTn1 : 10.02 3600.12 14.06 0.00 0.00 35485.99 6881.28 41466.41 00:16:47.851 =================================================================================================================== 00:16:47.851 Total : 3600.12 14.06 0.00 0.00 35485.99 6881.28 41466.41 00:16:47.851 0 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84376 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84376 ']' 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84376 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84376 00:16:47.851 killing process with pid 84376 00:16:47.851 Received shutdown signal, test time was about 10.000000 seconds 00:16:47.851 00:16:47.851 Latency(us) 00:16:47.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.851 =================================================================================================================== 00:16:47.851 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84376' 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84376 00:16:47.851 [2024-07-12 14:56:24.345007] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84376 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ZbN0wLiCj5 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZbN0wLiCj5 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZbN0wLiCj5 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:47.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
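The TLSTESTn1 results above (roughly 3600 IOPS with a 35.5 ms average latency over the 10-second verify run) come from the initiator side being driven entirely over bdevperf's RPC socket. A condensed sketch of that flow, using the exact commands from the trace; the bdevperf application itself was started with -t 10 (test duration), while the -t 20 passed to bdevperf.py appears to be the client-side wait timeout rather than a second run time.

# bdevperf was launched earlier as:
#   bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.ZbN0wLiCj5                   # TLS handshake happens here; creates bdev TLSTESTn1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests   # runs the queued verify workload and reports the table above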
00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZbN0wLiCj5 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZbN0wLiCj5' 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84510 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84510 /var/tmp/bdevperf.sock 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84510 ']' 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:47.851 14:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.851 [2024-07-12 14:56:24.567983] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:16:47.851 [2024-07-12 14:56:24.568336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84510 ] 00:16:47.851 [2024-07-12 14:56:24.703697] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.851 [2024-07-12 14:56:24.810243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.851 14:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:47.851 14:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:47.851 14:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZbN0wLiCj5 00:16:47.851 [2024-07-12 14:56:25.909994] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:47.851 [2024-07-12 14:56:25.910066] bdev_nvme.c:6133:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:47.851 [2024-07-12 14:56:25.910077] bdev_nvme.c:6238:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ZbN0wLiCj5 00:16:47.851 2024/07/12 14:56:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.ZbN0wLiCj5 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:16:47.851 request: 00:16:47.851 { 00:16:47.851 "method": "bdev_nvme_attach_controller", 00:16:47.851 "params": { 00:16:47.851 "name": "TLSTEST", 00:16:47.851 "trtype": "tcp", 00:16:47.851 "traddr": "10.0.0.2", 00:16:47.851 "adrfam": "ipv4", 00:16:47.851 "trsvcid": "4420", 00:16:47.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:47.851 "prchk_reftag": false, 00:16:47.851 "prchk_guard": false, 00:16:47.851 "hdgst": false, 00:16:47.851 "ddgst": false, 00:16:47.851 "psk": "/tmp/tmp.ZbN0wLiCj5" 00:16:47.851 } 00:16:47.851 } 00:16:47.851 Got JSON-RPC error response 00:16:47.851 GoRPCClient: error on JSON-RPC call 00:16:47.851 14:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84510 00:16:47.851 14:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84510 ']' 00:16:47.851 14:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84510 00:16:47.851 14:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:47.851 14:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.851 14:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84510 00:16:47.851 killing process with pid 84510 00:16:47.851 Received shutdown signal, test time was about 10.000000 seconds 00:16:47.851 00:16:47.851 Latency(us) 00:16:47.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.851 =================================================================================================================== 00:16:47.851 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:47.851 
14:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:47.851 14:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:47.851 14:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84510' 00:16:47.851 14:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84510 00:16:47.851 14:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84510 00:16:47.851 14:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:47.851 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:47.851 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:47.851 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:47.851 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:47.851 14:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 84279 00:16:47.851 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84279 ']' 00:16:47.851 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84279 00:16:47.851 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:47.851 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.851 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84279 00:16:47.851 killing process with pid 84279 00:16:47.851 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84279' 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84279 00:16:47.852 [2024-07-12 14:56:26.130145] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84279 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84561 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84561 00:16:47.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84561 ']' 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
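The Code=-1 "Operation not permitted" failure above is the whole point of the chmod 0666 step: bdev_nvme refuses to load a PSK whose file mode grants group or other access ("Incorrect permissions for PSK file"), so the attach is expected to fail until the key is restored to 0600 later in the run (tls.sh@181). A small pre-flight guard in the same spirit, as a sketch of the convention the test enforces rather than the SPDK-internal check:

psk=/tmp/tmp.ZbN0wLiCj5
mode=$(stat -c '%a' "$psk")
if [[ $mode != 600 ]]; then
    echo "PSK file $psk has mode $mode; chmod 0600 it before passing --psk" >&2
    exit 1
fi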
00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:47.852 14:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.852 [2024-07-12 14:56:26.351102] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:16:47.852 [2024-07-12 14:56:26.351197] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.852 [2024-07-12 14:56:26.485264] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.110 [2024-07-12 14:56:26.543996] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.110 [2024-07-12 14:56:26.544051] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.110 [2024-07-12 14:56:26.544073] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.110 [2024-07-12 14:56:26.544081] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.110 [2024-07-12 14:56:26.544090] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.110 [2024-07-12 14:56:26.544115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ZbN0wLiCj5 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ZbN0wLiCj5 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.ZbN0wLiCj5 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZbN0wLiCj5 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:49.043 [2024-07-12 14:56:27.635935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.043 14:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:49.301 14:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:49.559 [2024-07-12 14:56:28.152048] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:49.559 [2024-07-12 14:56:28.152269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.559 14:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:49.816 malloc0 00:16:50.074 14:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:50.331 14:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZbN0wLiCj5 00:16:50.589 [2024-07-12 14:56:29.039027] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:50.589 [2024-07-12 14:56:29.039082] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:16:50.589 [2024-07-12 14:56:29.039117] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:50.589 2024/07/12 14:56:29 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.ZbN0wLiCj5], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:16:50.589 request: 00:16:50.589 { 00:16:50.589 "method": "nvmf_subsystem_add_host", 00:16:50.589 "params": { 00:16:50.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.589 "host": "nqn.2016-06.io.spdk:host1", 00:16:50.589 "psk": "/tmp/tmp.ZbN0wLiCj5" 00:16:50.589 } 00:16:50.589 } 00:16:50.589 Got JSON-RPC error response 00:16:50.589 GoRPCClient: error on JSON-RPC call 00:16:50.589 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:50.589 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:50.589 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:50.589 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:50.589 14:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 84561 00:16:50.589 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84561 ']' 00:16:50.589 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84561 00:16:50.589 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:50.589 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:50.589 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84561 00:16:50.589 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:50.589 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:50.589 killing process with pid 84561 00:16:50.589 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84561' 00:16:50.589 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84561 00:16:50.589 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84561 00:16:50.863 14:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ZbN0wLiCj5 00:16:50.863 14:56:29 
nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:16:50.863 14:56:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:50.863 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:50.863 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.863 14:56:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84677 00:16:50.863 14:56:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:50.863 14:56:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84677 00:16:50.863 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84677 ']' 00:16:50.863 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.863 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.863 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.863 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.863 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.863 [2024-07-12 14:56:29.330639] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:16:50.863 [2024-07-12 14:56:29.330747] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.863 [2024-07-12 14:56:29.464207] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.143 [2024-07-12 14:56:29.524469] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.143 [2024-07-12 14:56:29.524550] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.143 [2024-07-12 14:56:29.524563] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.143 [2024-07-12 14:56:29.524572] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.143 [2024-07-12 14:56:29.524580] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
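Every application start in this log, nvmf_tgt and bdevperf alike, blocks on a waitforlisten-style loop before the first RPC is issued (the trace shows it setting max_retries=100 and disabling xtrace while polling). A simplified sketch of such a readiness probe follows; using rpc_get_methods as the liveness RPC and the 0.5-second poll interval are assumptions of this sketch, not taken from the helper itself.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
wait_for_rpc_sock() {                       # sketch of a waitforlisten-style helper
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1                  # process died before it started listening
        "$rpc_py" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.5
    done
    return 1                                                     # gave up after max retries
}
wait_for_rpc_sock "$nvmfpid" /var/tmp/spdk.sock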
00:16:51.143 [2024-07-12 14:56:29.524607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.143 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.143 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:51.143 14:56:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:51.143 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.143 14:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.143 14:56:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.143 14:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ZbN0wLiCj5 00:16:51.143 14:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZbN0wLiCj5 00:16:51.143 14:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:51.400 [2024-07-12 14:56:29.881737] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.400 14:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:51.658 14:56:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:51.916 [2024-07-12 14:56:30.397891] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:51.916 [2024-07-12 14:56:30.398179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.916 14:56:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:52.175 malloc0 00:16:52.175 14:56:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:52.433 14:56:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZbN0wLiCj5 00:16:52.691 [2024-07-12 14:56:31.192889] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:52.691 14:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=84766 00:16:52.691 14:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:52.691 14:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:52.691 14:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 84766 /var/tmp/bdevperf.sock 00:16:52.691 14:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84766 ']' 00:16:52.691 14:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.691 14:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.691 14:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:52.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.691 14:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.691 14:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.691 [2024-07-12 14:56:31.292179] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:16:52.691 [2024-07-12 14:56:31.292874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84766 ] 00:16:52.949 [2024-07-12 14:56:31.430085] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.949 [2024-07-12 14:56:31.491293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.882 14:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.882 14:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:53.882 14:56:32 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZbN0wLiCj5 00:16:54.140 [2024-07-12 14:56:32.564916] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:54.140 [2024-07-12 14:56:32.565702] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:54.140 TLSTESTn1 00:16:54.140 14:56:32 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:54.398 14:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:16:54.398 "subsystems": [ 00:16:54.398 { 00:16:54.398 "subsystem": "keyring", 00:16:54.398 "config": [] 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "subsystem": "iobuf", 00:16:54.398 "config": [ 00:16:54.398 { 00:16:54.398 "method": "iobuf_set_options", 00:16:54.398 "params": { 00:16:54.398 "large_bufsize": 135168, 00:16:54.398 "large_pool_count": 1024, 00:16:54.398 "small_bufsize": 8192, 00:16:54.398 "small_pool_count": 8192 00:16:54.398 } 00:16:54.398 } 00:16:54.398 ] 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "subsystem": "sock", 00:16:54.398 "config": [ 00:16:54.398 { 00:16:54.398 "method": "sock_set_default_impl", 00:16:54.398 "params": { 00:16:54.398 "impl_name": "posix" 00:16:54.398 } 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "method": "sock_impl_set_options", 00:16:54.398 "params": { 00:16:54.398 "enable_ktls": false, 00:16:54.398 "enable_placement_id": 0, 00:16:54.398 "enable_quickack": false, 00:16:54.398 "enable_recv_pipe": true, 00:16:54.398 "enable_zerocopy_send_client": false, 00:16:54.398 "enable_zerocopy_send_server": true, 00:16:54.398 "impl_name": "ssl", 00:16:54.398 "recv_buf_size": 4096, 00:16:54.398 "send_buf_size": 4096, 00:16:54.398 "tls_version": 0, 00:16:54.398 "zerocopy_threshold": 0 00:16:54.398 } 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "method": "sock_impl_set_options", 00:16:54.398 "params": { 00:16:54.398 "enable_ktls": false, 00:16:54.398 "enable_placement_id": 0, 00:16:54.398 "enable_quickack": false, 00:16:54.398 "enable_recv_pipe": true, 00:16:54.398 "enable_zerocopy_send_client": false, 00:16:54.398 "enable_zerocopy_send_server": 
true, 00:16:54.398 "impl_name": "posix", 00:16:54.398 "recv_buf_size": 2097152, 00:16:54.398 "send_buf_size": 2097152, 00:16:54.398 "tls_version": 0, 00:16:54.398 "zerocopy_threshold": 0 00:16:54.398 } 00:16:54.398 } 00:16:54.398 ] 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "subsystem": "vmd", 00:16:54.398 "config": [] 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "subsystem": "accel", 00:16:54.398 "config": [ 00:16:54.398 { 00:16:54.398 "method": "accel_set_options", 00:16:54.398 "params": { 00:16:54.398 "buf_count": 2048, 00:16:54.398 "large_cache_size": 16, 00:16:54.398 "sequence_count": 2048, 00:16:54.398 "small_cache_size": 128, 00:16:54.398 "task_count": 2048 00:16:54.398 } 00:16:54.398 } 00:16:54.398 ] 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "subsystem": "bdev", 00:16:54.398 "config": [ 00:16:54.398 { 00:16:54.398 "method": "bdev_set_options", 00:16:54.398 "params": { 00:16:54.398 "bdev_auto_examine": true, 00:16:54.398 "bdev_io_cache_size": 256, 00:16:54.398 "bdev_io_pool_size": 65535, 00:16:54.398 "iobuf_large_cache_size": 16, 00:16:54.398 "iobuf_small_cache_size": 128 00:16:54.398 } 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "method": "bdev_raid_set_options", 00:16:54.398 "params": { 00:16:54.398 "process_window_size_kb": 1024 00:16:54.398 } 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "method": "bdev_iscsi_set_options", 00:16:54.398 "params": { 00:16:54.398 "timeout_sec": 30 00:16:54.398 } 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "method": "bdev_nvme_set_options", 00:16:54.398 "params": { 00:16:54.398 "action_on_timeout": "none", 00:16:54.398 "allow_accel_sequence": false, 00:16:54.398 "arbitration_burst": 0, 00:16:54.398 "bdev_retry_count": 3, 00:16:54.398 "ctrlr_loss_timeout_sec": 0, 00:16:54.398 "delay_cmd_submit": true, 00:16:54.398 "dhchap_dhgroups": [ 00:16:54.398 "null", 00:16:54.398 "ffdhe2048", 00:16:54.398 "ffdhe3072", 00:16:54.398 "ffdhe4096", 00:16:54.398 "ffdhe6144", 00:16:54.398 "ffdhe8192" 00:16:54.398 ], 00:16:54.398 "dhchap_digests": [ 00:16:54.398 "sha256", 00:16:54.398 "sha384", 00:16:54.398 "sha512" 00:16:54.398 ], 00:16:54.398 "disable_auto_failback": false, 00:16:54.398 "fast_io_fail_timeout_sec": 0, 00:16:54.398 "generate_uuids": false, 00:16:54.398 "high_priority_weight": 0, 00:16:54.398 "io_path_stat": false, 00:16:54.398 "io_queue_requests": 0, 00:16:54.398 "keep_alive_timeout_ms": 10000, 00:16:54.398 "low_priority_weight": 0, 00:16:54.398 "medium_priority_weight": 0, 00:16:54.398 "nvme_adminq_poll_period_us": 10000, 00:16:54.398 "nvme_error_stat": false, 00:16:54.398 "nvme_ioq_poll_period_us": 0, 00:16:54.398 "rdma_cm_event_timeout_ms": 0, 00:16:54.398 "rdma_max_cq_size": 0, 00:16:54.398 "rdma_srq_size": 0, 00:16:54.398 "reconnect_delay_sec": 0, 00:16:54.398 "timeout_admin_us": 0, 00:16:54.398 "timeout_us": 0, 00:16:54.398 "transport_ack_timeout": 0, 00:16:54.398 "transport_retry_count": 4, 00:16:54.398 "transport_tos": 0 00:16:54.398 } 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "method": "bdev_nvme_set_hotplug", 00:16:54.398 "params": { 00:16:54.398 "enable": false, 00:16:54.398 "period_us": 100000 00:16:54.398 } 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "method": "bdev_malloc_create", 00:16:54.398 "params": { 00:16:54.398 "block_size": 4096, 00:16:54.398 "name": "malloc0", 00:16:54.398 "num_blocks": 8192, 00:16:54.398 "optimal_io_boundary": 0, 00:16:54.398 "physical_block_size": 4096, 00:16:54.398 "uuid": "328e7023-01dc-4c1d-926a-1575d53880c2" 00:16:54.398 } 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "method": "bdev_wait_for_examine" 
00:16:54.398 } 00:16:54.398 ] 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "subsystem": "nbd", 00:16:54.398 "config": [] 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "subsystem": "scheduler", 00:16:54.398 "config": [ 00:16:54.398 { 00:16:54.398 "method": "framework_set_scheduler", 00:16:54.398 "params": { 00:16:54.398 "name": "static" 00:16:54.398 } 00:16:54.398 } 00:16:54.398 ] 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "subsystem": "nvmf", 00:16:54.398 "config": [ 00:16:54.398 { 00:16:54.398 "method": "nvmf_set_config", 00:16:54.398 "params": { 00:16:54.398 "admin_cmd_passthru": { 00:16:54.398 "identify_ctrlr": false 00:16:54.398 }, 00:16:54.398 "discovery_filter": "match_any" 00:16:54.398 } 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "method": "nvmf_set_max_subsystems", 00:16:54.398 "params": { 00:16:54.398 "max_subsystems": 1024 00:16:54.398 } 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "method": "nvmf_set_crdt", 00:16:54.398 "params": { 00:16:54.398 "crdt1": 0, 00:16:54.398 "crdt2": 0, 00:16:54.398 "crdt3": 0 00:16:54.398 } 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "method": "nvmf_create_transport", 00:16:54.398 "params": { 00:16:54.398 "abort_timeout_sec": 1, 00:16:54.398 "ack_timeout": 0, 00:16:54.398 "buf_cache_size": 4294967295, 00:16:54.398 "c2h_success": false, 00:16:54.398 "data_wr_pool_size": 0, 00:16:54.398 "dif_insert_or_strip": false, 00:16:54.398 "in_capsule_data_size": 4096, 00:16:54.398 "io_unit_size": 131072, 00:16:54.398 "max_aq_depth": 128, 00:16:54.398 "max_io_qpairs_per_ctrlr": 127, 00:16:54.398 "max_io_size": 131072, 00:16:54.398 "max_queue_depth": 128, 00:16:54.398 "num_shared_buffers": 511, 00:16:54.398 "sock_priority": 0, 00:16:54.398 "trtype": "TCP", 00:16:54.398 "zcopy": false 00:16:54.398 } 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "method": "nvmf_create_subsystem", 00:16:54.398 "params": { 00:16:54.398 "allow_any_host": false, 00:16:54.398 "ana_reporting": false, 00:16:54.398 "max_cntlid": 65519, 00:16:54.398 "max_namespaces": 10, 00:16:54.398 "min_cntlid": 1, 00:16:54.398 "model_number": "SPDK bdev Controller", 00:16:54.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.398 "serial_number": "SPDK00000000000001" 00:16:54.398 } 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "method": "nvmf_subsystem_add_host", 00:16:54.398 "params": { 00:16:54.398 "host": "nqn.2016-06.io.spdk:host1", 00:16:54.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.398 "psk": "/tmp/tmp.ZbN0wLiCj5" 00:16:54.398 } 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "method": "nvmf_subsystem_add_ns", 00:16:54.398 "params": { 00:16:54.398 "namespace": { 00:16:54.398 "bdev_name": "malloc0", 00:16:54.398 "nguid": "328E702301DC4C1D926A1575D53880C2", 00:16:54.398 "no_auto_visible": false, 00:16:54.398 "nsid": 1, 00:16:54.398 "uuid": "328e7023-01dc-4c1d-926a-1575d53880c2" 00:16:54.398 }, 00:16:54.398 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:54.398 } 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "method": "nvmf_subsystem_add_listener", 00:16:54.398 "params": { 00:16:54.398 "listen_address": { 00:16:54.398 "adrfam": "IPv4", 00:16:54.398 "traddr": "10.0.0.2", 00:16:54.398 "trsvcid": "4420", 00:16:54.398 "trtype": "TCP" 00:16:54.398 }, 00:16:54.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.398 "secure_channel": true 00:16:54.398 } 00:16:54.399 } 00:16:54.399 ] 00:16:54.399 } 00:16:54.399 ] 00:16:54.399 }' 00:16:54.399 14:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:54.964 14:56:33 nvmf_tcp.nvmf_tls -- 
target/tls.sh@197 -- # bdevperfconf='{ 00:16:54.964 "subsystems": [ 00:16:54.964 { 00:16:54.964 "subsystem": "keyring", 00:16:54.964 "config": [] 00:16:54.964 }, 00:16:54.964 { 00:16:54.964 "subsystem": "iobuf", 00:16:54.964 "config": [ 00:16:54.964 { 00:16:54.964 "method": "iobuf_set_options", 00:16:54.964 "params": { 00:16:54.964 "large_bufsize": 135168, 00:16:54.964 "large_pool_count": 1024, 00:16:54.964 "small_bufsize": 8192, 00:16:54.965 "small_pool_count": 8192 00:16:54.965 } 00:16:54.965 } 00:16:54.965 ] 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "subsystem": "sock", 00:16:54.965 "config": [ 00:16:54.965 { 00:16:54.965 "method": "sock_set_default_impl", 00:16:54.965 "params": { 00:16:54.965 "impl_name": "posix" 00:16:54.965 } 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "method": "sock_impl_set_options", 00:16:54.965 "params": { 00:16:54.965 "enable_ktls": false, 00:16:54.965 "enable_placement_id": 0, 00:16:54.965 "enable_quickack": false, 00:16:54.965 "enable_recv_pipe": true, 00:16:54.965 "enable_zerocopy_send_client": false, 00:16:54.965 "enable_zerocopy_send_server": true, 00:16:54.965 "impl_name": "ssl", 00:16:54.965 "recv_buf_size": 4096, 00:16:54.965 "send_buf_size": 4096, 00:16:54.965 "tls_version": 0, 00:16:54.965 "zerocopy_threshold": 0 00:16:54.965 } 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "method": "sock_impl_set_options", 00:16:54.965 "params": { 00:16:54.965 "enable_ktls": false, 00:16:54.965 "enable_placement_id": 0, 00:16:54.965 "enable_quickack": false, 00:16:54.965 "enable_recv_pipe": true, 00:16:54.965 "enable_zerocopy_send_client": false, 00:16:54.965 "enable_zerocopy_send_server": true, 00:16:54.965 "impl_name": "posix", 00:16:54.965 "recv_buf_size": 2097152, 00:16:54.965 "send_buf_size": 2097152, 00:16:54.965 "tls_version": 0, 00:16:54.965 "zerocopy_threshold": 0 00:16:54.965 } 00:16:54.965 } 00:16:54.965 ] 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "subsystem": "vmd", 00:16:54.965 "config": [] 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "subsystem": "accel", 00:16:54.965 "config": [ 00:16:54.965 { 00:16:54.965 "method": "accel_set_options", 00:16:54.965 "params": { 00:16:54.965 "buf_count": 2048, 00:16:54.965 "large_cache_size": 16, 00:16:54.965 "sequence_count": 2048, 00:16:54.965 "small_cache_size": 128, 00:16:54.965 "task_count": 2048 00:16:54.965 } 00:16:54.965 } 00:16:54.965 ] 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "subsystem": "bdev", 00:16:54.965 "config": [ 00:16:54.965 { 00:16:54.965 "method": "bdev_set_options", 00:16:54.965 "params": { 00:16:54.965 "bdev_auto_examine": true, 00:16:54.965 "bdev_io_cache_size": 256, 00:16:54.965 "bdev_io_pool_size": 65535, 00:16:54.965 "iobuf_large_cache_size": 16, 00:16:54.965 "iobuf_small_cache_size": 128 00:16:54.965 } 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "method": "bdev_raid_set_options", 00:16:54.965 "params": { 00:16:54.965 "process_window_size_kb": 1024 00:16:54.965 } 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "method": "bdev_iscsi_set_options", 00:16:54.965 "params": { 00:16:54.965 "timeout_sec": 30 00:16:54.965 } 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "method": "bdev_nvme_set_options", 00:16:54.965 "params": { 00:16:54.965 "action_on_timeout": "none", 00:16:54.965 "allow_accel_sequence": false, 00:16:54.965 "arbitration_burst": 0, 00:16:54.965 "bdev_retry_count": 3, 00:16:54.965 "ctrlr_loss_timeout_sec": 0, 00:16:54.965 "delay_cmd_submit": true, 00:16:54.965 "dhchap_dhgroups": [ 00:16:54.965 "null", 00:16:54.965 "ffdhe2048", 00:16:54.965 "ffdhe3072", 00:16:54.965 "ffdhe4096", 
00:16:54.965 "ffdhe6144", 00:16:54.965 "ffdhe8192" 00:16:54.965 ], 00:16:54.965 "dhchap_digests": [ 00:16:54.965 "sha256", 00:16:54.965 "sha384", 00:16:54.965 "sha512" 00:16:54.965 ], 00:16:54.965 "disable_auto_failback": false, 00:16:54.965 "fast_io_fail_timeout_sec": 0, 00:16:54.965 "generate_uuids": false, 00:16:54.965 "high_priority_weight": 0, 00:16:54.965 "io_path_stat": false, 00:16:54.965 "io_queue_requests": 512, 00:16:54.965 "keep_alive_timeout_ms": 10000, 00:16:54.965 "low_priority_weight": 0, 00:16:54.965 "medium_priority_weight": 0, 00:16:54.965 "nvme_adminq_poll_period_us": 10000, 00:16:54.965 "nvme_error_stat": false, 00:16:54.965 "nvme_ioq_poll_period_us": 0, 00:16:54.965 "rdma_cm_event_timeout_ms": 0, 00:16:54.965 "rdma_max_cq_size": 0, 00:16:54.965 "rdma_srq_size": 0, 00:16:54.965 "reconnect_delay_sec": 0, 00:16:54.965 "timeout_admin_us": 0, 00:16:54.965 "timeout_us": 0, 00:16:54.965 "transport_ack_timeout": 0, 00:16:54.965 "transport_retry_count": 4, 00:16:54.965 "transport_tos": 0 00:16:54.965 } 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "method": "bdev_nvme_attach_controller", 00:16:54.965 "params": { 00:16:54.965 "adrfam": "IPv4", 00:16:54.965 "ctrlr_loss_timeout_sec": 0, 00:16:54.965 "ddgst": false, 00:16:54.965 "fast_io_fail_timeout_sec": 0, 00:16:54.965 "hdgst": false, 00:16:54.965 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:54.965 "name": "TLSTEST", 00:16:54.965 "prchk_guard": false, 00:16:54.965 "prchk_reftag": false, 00:16:54.965 "psk": "/tmp/tmp.ZbN0wLiCj5", 00:16:54.965 "reconnect_delay_sec": 0, 00:16:54.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.965 "traddr": "10.0.0.2", 00:16:54.965 "trsvcid": "4420", 00:16:54.965 "trtype": "TCP" 00:16:54.965 } 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "method": "bdev_nvme_set_hotplug", 00:16:54.965 "params": { 00:16:54.965 "enable": false, 00:16:54.965 "period_us": 100000 00:16:54.965 } 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "method": "bdev_wait_for_examine" 00:16:54.965 } 00:16:54.965 ] 00:16:54.965 }, 00:16:54.965 { 00:16:54.965 "subsystem": "nbd", 00:16:54.965 "config": [] 00:16:54.965 } 00:16:54.965 ] 00:16:54.965 }' 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 84766 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84766 ']' 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84766 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84766 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:54.965 killing process with pid 84766 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84766' 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84766 00:16:54.965 Received shutdown signal, test time was about 10.000000 seconds 00:16:54.965 00:16:54.965 Latency(us) 00:16:54.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.965 =================================================================================================================== 00:16:54.965 Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:16:54.965 [2024-07-12 14:56:33.424445] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84766 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 84677 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84677 ']' 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84677 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84677 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:54.965 killing process with pid 84677 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84677' 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84677 00:16:54.965 [2024-07-12 14:56:33.613124] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:54.965 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84677 00:16:55.225 14:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:55.225 14:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:16:55.225 "subsystems": [ 00:16:55.225 { 00:16:55.225 "subsystem": "keyring", 00:16:55.225 "config": [] 00:16:55.225 }, 00:16:55.225 { 00:16:55.225 "subsystem": "iobuf", 00:16:55.225 "config": [ 00:16:55.225 { 00:16:55.225 "method": "iobuf_set_options", 00:16:55.225 "params": { 00:16:55.225 "large_bufsize": 135168, 00:16:55.225 "large_pool_count": 1024, 00:16:55.225 "small_bufsize": 8192, 00:16:55.225 "small_pool_count": 8192 00:16:55.225 } 00:16:55.225 } 00:16:55.225 ] 00:16:55.225 }, 00:16:55.225 { 00:16:55.225 "subsystem": "sock", 00:16:55.225 "config": [ 00:16:55.225 { 00:16:55.225 "method": "sock_set_default_impl", 00:16:55.225 "params": { 00:16:55.225 "impl_name": "posix" 00:16:55.225 } 00:16:55.225 }, 00:16:55.225 { 00:16:55.225 "method": "sock_impl_set_options", 00:16:55.225 "params": { 00:16:55.225 "enable_ktls": false, 00:16:55.225 "enable_placement_id": 0, 00:16:55.225 "enable_quickack": false, 00:16:55.225 "enable_recv_pipe": true, 00:16:55.225 "enable_zerocopy_send_client": false, 00:16:55.225 "enable_zerocopy_send_server": true, 00:16:55.225 "impl_name": "ssl", 00:16:55.225 "recv_buf_size": 4096, 00:16:55.225 "send_buf_size": 4096, 00:16:55.225 "tls_version": 0, 00:16:55.225 "zerocopy_threshold": 0 00:16:55.225 } 00:16:55.225 }, 00:16:55.225 { 00:16:55.225 "method": "sock_impl_set_options", 00:16:55.225 "params": { 00:16:55.225 "enable_ktls": false, 00:16:55.225 "enable_placement_id": 0, 00:16:55.225 "enable_quickack": false, 00:16:55.225 "enable_recv_pipe": true, 00:16:55.225 "enable_zerocopy_send_client": false, 00:16:55.225 "enable_zerocopy_send_server": true, 00:16:55.225 "impl_name": "posix", 00:16:55.225 "recv_buf_size": 2097152, 00:16:55.225 "send_buf_size": 2097152, 00:16:55.225 "tls_version": 0, 00:16:55.225 "zerocopy_threshold": 0 
00:16:55.225 } 00:16:55.225 } 00:16:55.225 ] 00:16:55.225 }, 00:16:55.225 { 00:16:55.225 "subsystem": "vmd", 00:16:55.225 "config": [] 00:16:55.225 }, 00:16:55.225 { 00:16:55.225 "subsystem": "accel", 00:16:55.225 "config": [ 00:16:55.225 { 00:16:55.225 "method": "accel_set_options", 00:16:55.225 "params": { 00:16:55.225 "buf_count": 2048, 00:16:55.225 "large_cache_size": 16, 00:16:55.225 "sequence_count": 2048, 00:16:55.225 "small_cache_size": 128, 00:16:55.225 "task_count": 2048 00:16:55.225 } 00:16:55.225 } 00:16:55.225 ] 00:16:55.225 }, 00:16:55.225 { 00:16:55.225 "subsystem": "bdev", 00:16:55.225 "config": [ 00:16:55.225 { 00:16:55.225 "method": "bdev_set_options", 00:16:55.225 "params": { 00:16:55.225 "bdev_auto_examine": true, 00:16:55.225 "bdev_io_cache_size": 256, 00:16:55.225 "bdev_io_pool_size": 65535, 00:16:55.225 "iobuf_large_cache_size": 16, 00:16:55.225 "iobuf_small_cache_size": 128 00:16:55.225 } 00:16:55.225 }, 00:16:55.225 { 00:16:55.225 "method": "bdev_raid_set_options", 00:16:55.225 "params": { 00:16:55.225 "process_window_size_kb": 1024 00:16:55.225 } 00:16:55.225 }, 00:16:55.225 { 00:16:55.225 "method": "bdev_iscsi_set_options", 00:16:55.225 "params": { 00:16:55.225 "timeout_sec": 30 00:16:55.225 } 00:16:55.225 }, 00:16:55.226 { 00:16:55.226 "method": "bdev_nvme_set_options", 00:16:55.226 "params": { 00:16:55.226 "action_on_timeout": "none", 00:16:55.226 "allow_accel_sequence": false, 00:16:55.226 "arbitration_burst": 0, 00:16:55.226 "bdev_retry_count": 3, 00:16:55.226 "ctrlr_loss_timeout_sec": 0, 00:16:55.226 "delay_cmd_submit": true, 00:16:55.226 "dhchap_dhgroups": [ 00:16:55.226 "null", 00:16:55.226 "ffdhe2048", 00:16:55.226 "ffdhe3072", 00:16:55.226 "ffdhe4096", 00:16:55.226 "ffdhe6144", 00:16:55.226 "ffdhe8192" 00:16:55.226 ], 00:16:55.226 "dhchap_digests": [ 00:16:55.226 "sha256", 00:16:55.226 "sha384", 00:16:55.226 "sha512" 00:16:55.226 ], 00:16:55.226 "disable_auto_failback": false, 00:16:55.226 "fast_io_fail_timeout_sec": 0, 00:16:55.226 "generate_uuids": false, 00:16:55.226 "high_priority_weight": 0, 00:16:55.226 "io_path_stat": false, 00:16:55.226 "io_queue_requests": 0, 00:16:55.226 "keep_alive_timeout_ms": 10000, 00:16:55.226 "low_priority_weight": 0, 00:16:55.226 "medium_priority_weight": 0, 00:16:55.226 "nvme_adminq_poll_period_us": 10000, 00:16:55.226 "nvme_error_stat": false, 00:16:55.226 "nvme_ioq_poll_period_us": 0, 00:16:55.226 "rdma_cm_event_timeout_ms": 0, 00:16:55.226 "rdma_max_cq_size": 0, 00:16:55.226 "rdma_srq_size": 0, 00:16:55.226 "reconnect_delay_sec": 0, 00:16:55.226 "timeout_admin_us": 0, 00:16:55.226 "timeout_us": 0, 00:16:55.226 "transport_ack_timeout": 0, 00:16:55.226 "transport_retry_count": 4, 00:16:55.226 "transport_tos": 0 00:16:55.226 } 00:16:55.226 }, 00:16:55.226 { 00:16:55.226 "method": "bdev_nvme_set_hotplug", 00:16:55.226 "params": { 00:16:55.226 "enable": false, 00:16:55.226 "period_us": 100000 00:16:55.226 } 00:16:55.226 }, 00:16:55.226 { 00:16:55.226 "method": "bdev_malloc_create", 00:16:55.226 "params": { 00:16:55.226 "block_size": 4096, 00:16:55.226 "name": "malloc0", 00:16:55.226 "num_blocks": 8192, 00:16:55.226 "optimal_io_boundary": 0, 00:16:55.226 "physical_block_size": 4096, 00:16:55.226 "uuid": "328e7023-01dc-4c1d-926a-1575d53880c2" 00:16:55.226 } 00:16:55.226 }, 00:16:55.226 { 00:16:55.226 "method": "bdev_wait_for_examine" 00:16:55.226 } 00:16:55.226 ] 00:16:55.226 }, 00:16:55.226 { 00:16:55.226 "subsystem": "nbd", 00:16:55.226 "config": [] 00:16:55.226 }, 00:16:55.226 { 00:16:55.226 "subsystem": "scheduler", 
00:16:55.226 "config": [ 00:16:55.226 { 00:16:55.226 "method": "framework_set_scheduler", 00:16:55.226 "params": { 00:16:55.226 "name": "static" 00:16:55.226 } 00:16:55.226 } 00:16:55.226 ] 00:16:55.226 }, 00:16:55.226 { 00:16:55.226 "subsystem": "nvmf", 00:16:55.226 "config": [ 00:16:55.226 { 00:16:55.226 "method": "nvmf_set_config", 00:16:55.226 "params": { 00:16:55.226 "admin_cmd_passthru": { 00:16:55.226 "identify_ctrlr": false 00:16:55.226 }, 00:16:55.226 "discovery_filter": "match_any" 00:16:55.226 } 00:16:55.226 }, 00:16:55.226 { 00:16:55.226 "method": "nvmf_set_max_subsystems", 00:16:55.226 "params": { 00:16:55.226 "max_subsystems": 1024 00:16:55.226 } 00:16:55.226 }, 00:16:55.226 { 00:16:55.226 "method": "nvmf_set_crdt", 00:16:55.226 "params": { 00:16:55.226 "crdt1": 0, 00:16:55.226 "crdt2": 0, 00:16:55.226 "crdt3": 0 00:16:55.226 } 00:16:55.226 }, 00:16:55.226 { 00:16:55.226 "method": "nvmf_create_transport", 00:16:55.226 "params": { 00:16:55.226 "abort_timeout_sec": 1, 00:16:55.226 "ack_timeout": 0, 00:16:55.226 "buf_cache_size": 4294967295, 00:16:55.226 "c2h_success": false, 00:16:55.226 "data_wr_pool_size": 0, 00:16:55.226 "dif_insert_or_strip": false, 00:16:55.226 "in_capsule_data_size": 4096, 00:16:55.226 "io_unit_size": 131072, 00:16:55.226 "max_aq_depth": 128, 00:16:55.226 "max_io_qpairs_per_ctrlr": 127, 00:16:55.226 "max_io_size": 131072, 00:16:55.226 "max_queue_depth": 128, 00:16:55.226 "num_shared_buffers": 511, 00:16:55.226 "sock_priority": 0, 00:16:55.226 "trtype": "TCP", 00:16:55.226 "zcopy": false 00:16:55.226 } 00:16:55.226 }, 00:16:55.226 { 00:16:55.226 "method": "nvmf_create_subsystem", 00:16:55.226 "params": { 00:16:55.226 "allow_any_host": false, 00:16:55.226 "ana_reporting": false, 00:16:55.226 "max_cntlid": 65519, 00:16:55.226 "max_namespaces": 10, 00:16:55.226 "min_cntlid": 1, 00:16:55.226 "model_number": "SPDK bdev Controller", 00:16:55.226 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.226 "serial_number": "SPDK00000000000001" 00:16:55.226 } 00:16:55.226 }, 00:16:55.226 { 00:16:55.226 "method": "nvmf_subsystem_add_host", 00:16:55.226 "params": { 00:16:55.226 "host": "nqn.2016-06.io.spdk:host1", 00:16:55.226 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.226 "psk": "/tmp/tmp.ZbN0wLiCj5" 00:16:55.226 } 00:16:55.226 }, 00:16:55.226 { 00:16:55.226 "method": "nvmf_subsystem_add_ns", 00:16:55.226 "params": { 00:16:55.226 "namespace": { 00:16:55.226 "bdev_name": "malloc0", 00:16:55.226 "nguid": "328E702301DC4C1D926A1575D53880C2", 00:16:55.226 "no_auto_visible": false, 00:16:55.226 "nsid": 1, 00:16:55.226 "uuid": "328e7023-01dc-4c1d-926a-1575d53880c2" 00:16:55.226 }, 00:16:55.226 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:55.226 } 00:16:55.226 }, 00:16:55.226 { 00:16:55.226 "method": "nvmf_subsystem_add_listener", 00:16:55.227 "params": { 00:16:55.227 "listen_address": { 00:16:55.227 "adrfam": "IPv4", 00:16:55.227 "traddr": "10.0.0.2", 00:16:55.227 "trsvcid": "4420", 00:16:55.227 "trtype": "TCP" 00:16:55.227 }, 00:16:55.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.227 "secure_channel": true 00:16:55.227 } 00:16:55.227 } 00:16:55.227 ] 00:16:55.227 } 00:16:55.227 ] 00:16:55.227 }' 00:16:55.227 14:56:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:55.227 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:55.227 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.227 14:56:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84840 00:16:55.227 14:56:33 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:55.227 14:56:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84840 00:16:55.227 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84840 ']' 00:16:55.227 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.227 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.227 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.227 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.227 14:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.227 [2024-07-12 14:56:33.850999] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:16:55.227 [2024-07-12 14:56:33.851108] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.485 [2024-07-12 14:56:33.991337] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.485 [2024-07-12 14:56:34.058558] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.485 [2024-07-12 14:56:34.058616] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.485 [2024-07-12 14:56:34.058629] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.485 [2024-07-12 14:56:34.058639] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.485 [2024-07-12 14:56:34.058647] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:55.485 [2024-07-12 14:56:34.058733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.744 [2024-07-12 14:56:34.249911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.744 [2024-07-12 14:56:34.265741] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:55.744 [2024-07-12 14:56:34.281757] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:55.744 [2024-07-12 14:56:34.281995] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.311 14:56:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.311 14:56:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:56.311 14:56:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:56.311 14:56:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:56.311 14:56:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.311 14:56:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
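Note: the target above is started with its entire configuration piped in as JSON via -c /dev/fd/62. The same TLS-enabled target state can also be built incrementally over JSON-RPC, which is what target/tls.sh@219 (setup_nvmf_tgt) does later in this run; a condensed sketch of that sequence, reusing the transport, NQNs, listener address, and PSK path seen in this log, follows (it assumes a running nvmf_tgt answering on the default RPC socket):

    # sketch only -- values copied from this log, not an additional test step
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o                                   # TCP transport
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: secure (TLS) listener
    $rpc bdev_malloc_create 32 4096 -b malloc0                             # backing namespace
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZbN0wLiCj5

The -k flag on nvmf_subsystem_add_listener corresponds to the "secure_channel": true entry visible in the saved configuration above, and --psk on nvmf_subsystem_add_host is the deprecated "PSK path" form that the nvmf_tcp_psk_path warnings in this log refer to.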
00:16:56.311 14:56:34 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84883 00:16:56.311 14:56:34 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84883 /var/tmp/bdevperf.sock 00:16:56.311 14:56:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84883 ']' 00:16:56.311 14:56:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.311 14:56:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.311 14:56:34 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:56.311 14:56:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.311 14:56:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.311 14:56:34 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:16:56.311 "subsystems": [ 00:16:56.311 { 00:16:56.311 "subsystem": "keyring", 00:16:56.311 "config": [] 00:16:56.311 }, 00:16:56.311 { 00:16:56.311 "subsystem": "iobuf", 00:16:56.311 "config": [ 00:16:56.311 { 00:16:56.311 "method": "iobuf_set_options", 00:16:56.311 "params": { 00:16:56.311 "large_bufsize": 135168, 00:16:56.311 "large_pool_count": 1024, 00:16:56.311 "small_bufsize": 8192, 00:16:56.311 "small_pool_count": 8192 00:16:56.311 } 00:16:56.311 } 00:16:56.311 ] 00:16:56.311 }, 00:16:56.311 { 00:16:56.311 "subsystem": "sock", 00:16:56.311 "config": [ 00:16:56.311 { 00:16:56.311 "method": "sock_set_default_impl", 00:16:56.311 "params": { 00:16:56.311 "impl_name": "posix" 00:16:56.311 } 00:16:56.311 }, 00:16:56.311 { 00:16:56.311 "method": "sock_impl_set_options", 00:16:56.311 "params": { 00:16:56.311 "enable_ktls": false, 00:16:56.311 "enable_placement_id": 0, 00:16:56.311 "enable_quickack": false, 00:16:56.311 "enable_recv_pipe": true, 00:16:56.311 "enable_zerocopy_send_client": false, 00:16:56.311 "enable_zerocopy_send_server": true, 00:16:56.311 "impl_name": "ssl", 00:16:56.311 "recv_buf_size": 4096, 00:16:56.311 "send_buf_size": 4096, 00:16:56.311 "tls_version": 0, 00:16:56.311 "zerocopy_threshold": 0 00:16:56.311 } 00:16:56.311 }, 00:16:56.311 { 00:16:56.311 "method": "sock_impl_set_options", 00:16:56.311 "params": { 00:16:56.311 "enable_ktls": false, 00:16:56.311 "enable_placement_id": 0, 00:16:56.311 "enable_quickack": false, 00:16:56.311 "enable_recv_pipe": true, 00:16:56.311 "enable_zerocopy_send_client": false, 00:16:56.311 "enable_zerocopy_send_server": true, 00:16:56.311 "impl_name": "posix", 00:16:56.311 "recv_buf_size": 2097152, 00:16:56.311 "send_buf_size": 2097152, 00:16:56.311 "tls_version": 0, 00:16:56.311 "zerocopy_threshold": 0 00:16:56.311 } 00:16:56.311 } 00:16:56.311 ] 00:16:56.311 }, 00:16:56.311 { 00:16:56.311 "subsystem": "vmd", 00:16:56.311 "config": [] 00:16:56.311 }, 00:16:56.311 { 00:16:56.311 "subsystem": "accel", 00:16:56.311 "config": [ 00:16:56.311 { 00:16:56.311 "method": "accel_set_options", 00:16:56.311 "params": { 00:16:56.311 "buf_count": 2048, 00:16:56.311 "large_cache_size": 16, 00:16:56.311 "sequence_count": 2048, 00:16:56.311 "small_cache_size": 128, 00:16:56.311 "task_count": 2048 00:16:56.311 } 00:16:56.311 } 00:16:56.311 ] 00:16:56.311 }, 00:16:56.311 { 00:16:56.311 "subsystem": "bdev", 00:16:56.311 "config": [ 00:16:56.311 { 00:16:56.311 "method": "bdev_set_options", 00:16:56.311 "params": { 00:16:56.311 
"bdev_auto_examine": true, 00:16:56.311 "bdev_io_cache_size": 256, 00:16:56.311 "bdev_io_pool_size": 65535, 00:16:56.311 "iobuf_large_cache_size": 16, 00:16:56.311 "iobuf_small_cache_size": 128 00:16:56.311 } 00:16:56.311 }, 00:16:56.311 { 00:16:56.311 "method": "bdev_raid_set_options", 00:16:56.311 "params": { 00:16:56.311 "process_window_size_kb": 1024 00:16:56.311 } 00:16:56.311 }, 00:16:56.311 { 00:16:56.311 "method": "bdev_iscsi_set_options", 00:16:56.311 "params": { 00:16:56.311 "timeout_sec": 30 00:16:56.311 } 00:16:56.311 }, 00:16:56.311 { 00:16:56.311 "method": "bdev_nvme_set_options", 00:16:56.311 "params": { 00:16:56.311 "action_on_timeout": "none", 00:16:56.311 "allow_accel_sequence": false, 00:16:56.311 "arbitration_burst": 0, 00:16:56.311 "bdev_retry_count": 3, 00:16:56.311 "ctrlr_loss_timeout_sec": 0, 00:16:56.311 "delay_cmd_submit": true, 00:16:56.311 "dhchap_dhgroups": [ 00:16:56.311 "null", 00:16:56.311 "ffdhe2048", 00:16:56.311 "ffdhe3072", 00:16:56.311 "ffdhe4096", 00:16:56.311 "ffdhe6144", 00:16:56.311 "ffdhe8192" 00:16:56.311 ], 00:16:56.311 "dhchap_digests": [ 00:16:56.311 "sha256", 00:16:56.311 "sha384", 00:16:56.311 "sha512" 00:16:56.311 ], 00:16:56.311 "disable_auto_failback": false, 00:16:56.311 "fast_io_fail_timeout_sec": 0, 00:16:56.311 "generate_uuids": false, 00:16:56.311 "high_priority_weight": 0, 00:16:56.311 "io_path_stat": false, 00:16:56.312 "io_queue_requests": 512, 00:16:56.312 "keep_alive_timeout_ms": 10000, 00:16:56.312 "low_priority_weight": 0, 00:16:56.312 "medium_priority_weight": 0, 00:16:56.312 "nvme_adminq_poll_period_us": 10000, 00:16:56.312 "nvme_error_stat": false, 00:16:56.312 "nvme_ioq_poll_period_us": 0, 00:16:56.312 "rdma_cm_event_timeout_ms": 0, 00:16:56.312 "rdma_max_cq_size": 0, 00:16:56.312 "rdma_srq_size": 0, 00:16:56.312 "reconnect_delay_sec": 0, 00:16:56.312 "timeout_admin_us": 0, 00:16:56.312 "timeout_us": 0, 00:16:56.312 "transport_ack_timeout": 0, 00:16:56.312 "transport_retry_count": 4, 00:16:56.312 "transport_tos": 0 00:16:56.312 } 00:16:56.312 }, 00:16:56.312 { 00:16:56.312 "method": "bdev_nvme_attach_controller", 00:16:56.312 "params": { 00:16:56.312 "adrfam": "IPv4", 00:16:56.312 "ctrlr_loss_timeout_sec": 0, 00:16:56.312 "ddgst": false, 00:16:56.312 "fast_io_fail_timeout_sec": 0, 00:16:56.312 "hdgst": false, 00:16:56.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:56.312 "name": "TLSTEST", 00:16:56.312 "prchk_guard": false, 00:16:56.312 "prchk_reftag": false, 00:16:56.312 "psk": "/tmp/tmp.ZbN0wLiCj5", 00:16:56.312 "reconnect_delay_sec": 0, 00:16:56.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.312 "traddr": "10.0.0.2", 00:16:56.312 "trsvcid": "4420", 00:16:56.312 "trtype": "TCP" 00:16:56.312 } 00:16:56.312 }, 00:16:56.312 { 00:16:56.312 "method": "bdev_nvme_set_hotplug", 00:16:56.312 "params": { 00:16:56.312 "enable": false, 00:16:56.312 "period_us": 100000 00:16:56.312 } 00:16:56.312 }, 00:16:56.312 { 00:16:56.312 "method": "bdev_wait_for_examine" 00:16:56.312 } 00:16:56.312 ] 00:16:56.312 }, 00:16:56.312 { 00:16:56.312 "subsystem": "nbd", 00:16:56.312 "config": [] 00:16:56.312 } 00:16:56.312 ] 00:16:56.312 }' 00:16:56.312 14:56:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.312 [2024-07-12 14:56:34.934550] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:16:56.312 [2024-07-12 14:56:34.934648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84883 ] 00:16:56.570 [2024-07-12 14:56:35.069058] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.570 [2024-07-12 14:56:35.155708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.829 [2024-07-12 14:56:35.283805] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:56.829 [2024-07-12 14:56:35.283917] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:57.775 14:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.775 14:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:57.775 14:56:36 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:57.775 Running I/O for 10 seconds... 00:17:07.761 00:17:07.761 Latency(us) 00:17:07.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.761 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:07.761 Verification LBA range: start 0x0 length 0x2000 00:17:07.761 TLSTESTn1 : 10.02 3510.66 13.71 0.00 0.00 36389.31 6166.34 36223.53 00:17:07.761 =================================================================================================================== 00:17:07.761 Total : 3510.66 13.71 0.00 0.00 36389.31 6166.34 36223.53 00:17:07.761 0 00:17:07.761 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:07.761 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 84883 00:17:07.761 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84883 ']' 00:17:07.761 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84883 00:17:07.761 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:07.761 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:07.761 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84883 00:17:07.761 killing process with pid 84883 00:17:07.761 Received shutdown signal, test time was about 10.000000 seconds 00:17:07.761 00:17:07.761 Latency(us) 00:17:07.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.761 =================================================================================================================== 00:17:07.761 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:07.761 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:07.761 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:07.761 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84883' 00:17:07.761 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84883 00:17:07.761 [2024-07-12 14:56:46.277057] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:07.762 14:56:46 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84883 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 84840 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84840 ']' 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84840 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84840 00:17:08.020 killing process with pid 84840 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84840' 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84840 00:17:08.020 [2024-07-12 14:56:46.460119] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84840 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85033 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85033 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85033 ']' 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.020 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.279 [2024-07-12 14:56:46.687216] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:17:08.279 [2024-07-12 14:56:46.687330] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.279 [2024-07-12 14:56:46.823452] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.279 [2024-07-12 14:56:46.880173] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:08.279 [2024-07-12 14:56:46.880231] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.279 [2024-07-12 14:56:46.880243] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.279 [2024-07-12 14:56:46.880251] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.279 [2024-07-12 14:56:46.880258] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.279 [2024-07-12 14:56:46.880288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.537 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.537 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:08.537 14:56:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:08.537 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:08.537 14:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.537 14:56:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.537 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ZbN0wLiCj5 00:17:08.537 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZbN0wLiCj5 00:17:08.537 14:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:08.795 [2024-07-12 14:56:47.250438] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.795 14:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:09.053 14:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:09.312 [2024-07-12 14:56:47.818539] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:09.312 [2024-07-12 14:56:47.818749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.312 14:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:09.571 malloc0 00:17:09.571 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:09.858 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZbN0wLiCj5 00:17:10.424 [2024-07-12 14:56:48.793288] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:10.424 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=85124 00:17:10.424 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:10.424 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.424 14:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 85124 /var/tmp/bdevperf.sock 00:17:10.424 
14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85124 ']' 00:17:10.424 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.424 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.424 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.424 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.424 14:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:10.424 [2024-07-12 14:56:48.859785] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:17:10.424 [2024-07-12 14:56:48.859878] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85124 ] 00:17:10.424 [2024-07-12 14:56:48.990919] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.424 [2024-07-12 14:56:49.068043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.682 14:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.682 14:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:10.682 14:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZbN0wLiCj5 00:17:10.940 14:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:11.197 [2024-07-12 14:56:49.767354] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:11.197 nvme0n1 00:17:11.454 14:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:11.454 Running I/O for 1 seconds... 
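Note: this bdevperf pass differs from the preceding one (pid 84883) in how the PSK reaches the initiator. The earlier pass embedded "psk": "/tmp/tmp.ZbN0wLiCj5" directly in bdev_nvme_attach_controller, which triggered the spdk_nvme_ctrlr_opts.psk deprecation warning; this pass registers the key in the keyring first and references it by name. A condensed sketch of the initiator-side sequence, with values taken from this log, follows (it assumes bdevperf is already listening on /var/tmp/bdevperf.sock):

    # sketch only -- values copied from this log, not an additional test step
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZbN0wLiCj5
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # drive I/O through the new bdev and collect results over the same socket
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests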
00:17:12.386 00:17:12.386 Latency(us) 00:17:12.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.386 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:12.386 Verification LBA range: start 0x0 length 0x2000 00:17:12.386 nvme0n1 : 1.02 3869.50 15.12 0.00 0.00 32724.41 6017.40 27286.81 00:17:12.386 =================================================================================================================== 00:17:12.386 Total : 3869.50 15.12 0.00 0.00 32724.41 6017.40 27286.81 00:17:12.386 0 00:17:12.386 14:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 85124 00:17:12.386 14:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85124 ']' 00:17:12.386 14:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85124 00:17:12.386 14:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:12.386 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.386 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85124 00:17:12.386 killing process with pid 85124 00:17:12.386 Received shutdown signal, test time was about 1.000000 seconds 00:17:12.386 00:17:12.386 Latency(us) 00:17:12.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.386 =================================================================================================================== 00:17:12.386 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:12.386 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:12.386 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:12.386 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85124' 00:17:12.386 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85124 00:17:12.386 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85124 00:17:12.643 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 85033 00:17:12.643 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85033 ']' 00:17:12.643 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85033 00:17:12.643 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:12.643 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.643 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85033 00:17:12.643 killing process with pid 85033 00:17:12.643 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:12.643 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:12.643 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85033' 00:17:12.643 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85033 00:17:12.643 [2024-07-12 14:56:51.209439] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:12.643 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85033 00:17:12.901 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:17:12.901 14:56:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:12.901 14:56:51 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:17:12.901 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.901 14:56:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85180 00:17:12.901 14:56:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:12.901 14:56:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85180 00:17:12.901 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85180 ']' 00:17:12.901 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.901 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.901 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.901 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.901 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.901 [2024-07-12 14:56:51.443143] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:17:12.901 [2024-07-12 14:56:51.443251] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.160 [2024-07-12 14:56:51.580963] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.160 [2024-07-12 14:56:51.638166] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.160 [2024-07-12 14:56:51.638211] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.160 [2024-07-12 14:56:51.638223] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.160 [2024-07-12 14:56:51.638231] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.160 [2024-07-12 14:56:51.638238] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:13.160 [2024-07-12 14:56:51.638268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.160 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.160 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:13.160 14:56:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:13.160 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:13.160 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.160 14:56:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.160 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:17:13.160 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.160 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.160 [2024-07-12 14:56:51.764782] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.160 malloc0 00:17:13.160 [2024-07-12 14:56:51.791146] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:13.160 [2024-07-12 14:56:51.791341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.418 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.418 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=85222 00:17:13.418 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:13.418 14:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 85222 /var/tmp/bdevperf.sock 00:17:13.418 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85222 ']' 00:17:13.418 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.418 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.418 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.418 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.419 14:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.419 [2024-07-12 14:56:51.875575] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:17:13.419 [2024-07-12 14:56:51.875671] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85222 ] 00:17:13.419 [2024-07-12 14:56:52.020323] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.676 [2024-07-12 14:56:52.079884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.242 14:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.242 14:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:14.242 14:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZbN0wLiCj5 00:17:14.499 14:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:14.756 [2024-07-12 14:56:53.341218] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:15.014 nvme0n1 00:17:15.014 14:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:15.014 Running I/O for 1 seconds... 00:17:15.948 00:17:15.948 Latency(us) 00:17:15.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.948 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:15.948 Verification LBA range: start 0x0 length 0x2000 00:17:15.948 nvme0n1 : 1.02 3629.94 14.18 0.00 0.00 34857.47 8638.84 22758.87 00:17:15.948 =================================================================================================================== 00:17:15.948 Total : 3629.94 14.18 0.00 0.00 34857.47 8638.84 22758.87 00:17:15.948 0 00:17:15.948 14:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:17:15.948 14:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.948 14:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.207 14:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.207 14:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:17:16.207 "subsystems": [ 00:17:16.207 { 00:17:16.207 "subsystem": "keyring", 00:17:16.207 "config": [ 00:17:16.207 { 00:17:16.207 "method": "keyring_file_add_key", 00:17:16.207 "params": { 00:17:16.207 "name": "key0", 00:17:16.207 "path": "/tmp/tmp.ZbN0wLiCj5" 00:17:16.207 } 00:17:16.207 } 00:17:16.207 ] 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "subsystem": "iobuf", 00:17:16.207 "config": [ 00:17:16.207 { 00:17:16.207 "method": "iobuf_set_options", 00:17:16.207 "params": { 00:17:16.207 "large_bufsize": 135168, 00:17:16.207 "large_pool_count": 1024, 00:17:16.207 "small_bufsize": 8192, 00:17:16.207 "small_pool_count": 8192 00:17:16.207 } 00:17:16.207 } 00:17:16.207 ] 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "subsystem": "sock", 00:17:16.207 "config": [ 00:17:16.207 { 00:17:16.207 "method": "sock_set_default_impl", 00:17:16.207 "params": { 00:17:16.207 "impl_name": "posix" 00:17:16.207 } 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "method": "sock_impl_set_options", 00:17:16.207 "params": { 00:17:16.207 
"enable_ktls": false, 00:17:16.207 "enable_placement_id": 0, 00:17:16.207 "enable_quickack": false, 00:17:16.207 "enable_recv_pipe": true, 00:17:16.207 "enable_zerocopy_send_client": false, 00:17:16.207 "enable_zerocopy_send_server": true, 00:17:16.207 "impl_name": "ssl", 00:17:16.207 "recv_buf_size": 4096, 00:17:16.207 "send_buf_size": 4096, 00:17:16.207 "tls_version": 0, 00:17:16.207 "zerocopy_threshold": 0 00:17:16.207 } 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "method": "sock_impl_set_options", 00:17:16.207 "params": { 00:17:16.207 "enable_ktls": false, 00:17:16.207 "enable_placement_id": 0, 00:17:16.207 "enable_quickack": false, 00:17:16.207 "enable_recv_pipe": true, 00:17:16.207 "enable_zerocopy_send_client": false, 00:17:16.207 "enable_zerocopy_send_server": true, 00:17:16.207 "impl_name": "posix", 00:17:16.207 "recv_buf_size": 2097152, 00:17:16.207 "send_buf_size": 2097152, 00:17:16.207 "tls_version": 0, 00:17:16.207 "zerocopy_threshold": 0 00:17:16.207 } 00:17:16.207 } 00:17:16.207 ] 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "subsystem": "vmd", 00:17:16.207 "config": [] 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "subsystem": "accel", 00:17:16.207 "config": [ 00:17:16.207 { 00:17:16.207 "method": "accel_set_options", 00:17:16.207 "params": { 00:17:16.207 "buf_count": 2048, 00:17:16.207 "large_cache_size": 16, 00:17:16.207 "sequence_count": 2048, 00:17:16.207 "small_cache_size": 128, 00:17:16.207 "task_count": 2048 00:17:16.207 } 00:17:16.207 } 00:17:16.207 ] 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "subsystem": "bdev", 00:17:16.207 "config": [ 00:17:16.207 { 00:17:16.207 "method": "bdev_set_options", 00:17:16.207 "params": { 00:17:16.207 "bdev_auto_examine": true, 00:17:16.207 "bdev_io_cache_size": 256, 00:17:16.207 "bdev_io_pool_size": 65535, 00:17:16.207 "iobuf_large_cache_size": 16, 00:17:16.207 "iobuf_small_cache_size": 128 00:17:16.207 } 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "method": "bdev_raid_set_options", 00:17:16.207 "params": { 00:17:16.207 "process_window_size_kb": 1024 00:17:16.207 } 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "method": "bdev_iscsi_set_options", 00:17:16.207 "params": { 00:17:16.207 "timeout_sec": 30 00:17:16.207 } 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "method": "bdev_nvme_set_options", 00:17:16.207 "params": { 00:17:16.207 "action_on_timeout": "none", 00:17:16.207 "allow_accel_sequence": false, 00:17:16.207 "arbitration_burst": 0, 00:17:16.207 "bdev_retry_count": 3, 00:17:16.207 "ctrlr_loss_timeout_sec": 0, 00:17:16.207 "delay_cmd_submit": true, 00:17:16.207 "dhchap_dhgroups": [ 00:17:16.207 "null", 00:17:16.207 "ffdhe2048", 00:17:16.207 "ffdhe3072", 00:17:16.207 "ffdhe4096", 00:17:16.207 "ffdhe6144", 00:17:16.207 "ffdhe8192" 00:17:16.207 ], 00:17:16.207 "dhchap_digests": [ 00:17:16.207 "sha256", 00:17:16.207 "sha384", 00:17:16.207 "sha512" 00:17:16.207 ], 00:17:16.207 "disable_auto_failback": false, 00:17:16.207 "fast_io_fail_timeout_sec": 0, 00:17:16.207 "generate_uuids": false, 00:17:16.207 "high_priority_weight": 0, 00:17:16.207 "io_path_stat": false, 00:17:16.207 "io_queue_requests": 0, 00:17:16.207 "keep_alive_timeout_ms": 10000, 00:17:16.207 "low_priority_weight": 0, 00:17:16.207 "medium_priority_weight": 0, 00:17:16.207 "nvme_adminq_poll_period_us": 10000, 00:17:16.207 "nvme_error_stat": false, 00:17:16.207 "nvme_ioq_poll_period_us": 0, 00:17:16.207 "rdma_cm_event_timeout_ms": 0, 00:17:16.207 "rdma_max_cq_size": 0, 00:17:16.207 "rdma_srq_size": 0, 00:17:16.207 "reconnect_delay_sec": 0, 00:17:16.207 "timeout_admin_us": 0, 
00:17:16.207 "timeout_us": 0, 00:17:16.207 "transport_ack_timeout": 0, 00:17:16.207 "transport_retry_count": 4, 00:17:16.207 "transport_tos": 0 00:17:16.207 } 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "method": "bdev_nvme_set_hotplug", 00:17:16.207 "params": { 00:17:16.207 "enable": false, 00:17:16.207 "period_us": 100000 00:17:16.207 } 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "method": "bdev_malloc_create", 00:17:16.207 "params": { 00:17:16.207 "block_size": 4096, 00:17:16.207 "name": "malloc0", 00:17:16.207 "num_blocks": 8192, 00:17:16.207 "optimal_io_boundary": 0, 00:17:16.207 "physical_block_size": 4096, 00:17:16.207 "uuid": "17334b96-8442-40b7-bccb-1cbd44f184dc" 00:17:16.207 } 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "method": "bdev_wait_for_examine" 00:17:16.207 } 00:17:16.207 ] 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "subsystem": "nbd", 00:17:16.207 "config": [] 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "subsystem": "scheduler", 00:17:16.207 "config": [ 00:17:16.207 { 00:17:16.207 "method": "framework_set_scheduler", 00:17:16.207 "params": { 00:17:16.207 "name": "static" 00:17:16.207 } 00:17:16.207 } 00:17:16.207 ] 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "subsystem": "nvmf", 00:17:16.207 "config": [ 00:17:16.207 { 00:17:16.207 "method": "nvmf_set_config", 00:17:16.207 "params": { 00:17:16.207 "admin_cmd_passthru": { 00:17:16.207 "identify_ctrlr": false 00:17:16.207 }, 00:17:16.207 "discovery_filter": "match_any" 00:17:16.207 } 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "method": "nvmf_set_max_subsystems", 00:17:16.207 "params": { 00:17:16.207 "max_subsystems": 1024 00:17:16.207 } 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "method": "nvmf_set_crdt", 00:17:16.207 "params": { 00:17:16.207 "crdt1": 0, 00:17:16.207 "crdt2": 0, 00:17:16.207 "crdt3": 0 00:17:16.207 } 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "method": "nvmf_create_transport", 00:17:16.207 "params": { 00:17:16.207 "abort_timeout_sec": 1, 00:17:16.207 "ack_timeout": 0, 00:17:16.207 "buf_cache_size": 4294967295, 00:17:16.207 "c2h_success": false, 00:17:16.207 "data_wr_pool_size": 0, 00:17:16.207 "dif_insert_or_strip": false, 00:17:16.207 "in_capsule_data_size": 4096, 00:17:16.207 "io_unit_size": 131072, 00:17:16.207 "max_aq_depth": 128, 00:17:16.207 "max_io_qpairs_per_ctrlr": 127, 00:17:16.207 "max_io_size": 131072, 00:17:16.207 "max_queue_depth": 128, 00:17:16.207 "num_shared_buffers": 511, 00:17:16.207 "sock_priority": 0, 00:17:16.207 "trtype": "TCP", 00:17:16.207 "zcopy": false 00:17:16.207 } 00:17:16.207 }, 00:17:16.207 { 00:17:16.207 "method": "nvmf_create_subsystem", 00:17:16.207 "params": { 00:17:16.207 "allow_any_host": false, 00:17:16.207 "ana_reporting": false, 00:17:16.207 "max_cntlid": 65519, 00:17:16.207 "max_namespaces": 32, 00:17:16.207 "min_cntlid": 1, 00:17:16.207 "model_number": "SPDK bdev Controller", 00:17:16.207 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.207 "serial_number": "00000000000000000000" 00:17:16.207 } 00:17:16.207 }, 00:17:16.208 { 00:17:16.208 "method": "nvmf_subsystem_add_host", 00:17:16.208 "params": { 00:17:16.208 "host": "nqn.2016-06.io.spdk:host1", 00:17:16.208 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.208 "psk": "key0" 00:17:16.208 } 00:17:16.208 }, 00:17:16.208 { 00:17:16.208 "method": "nvmf_subsystem_add_ns", 00:17:16.208 "params": { 00:17:16.208 "namespace": { 00:17:16.208 "bdev_name": "malloc0", 00:17:16.208 "nguid": "17334B96844240B7BCCB1CBD44F184DC", 00:17:16.208 "no_auto_visible": false, 00:17:16.208 "nsid": 1, 00:17:16.208 "uuid": 
"17334b96-8442-40b7-bccb-1cbd44f184dc" 00:17:16.208 }, 00:17:16.208 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:16.208 } 00:17:16.208 }, 00:17:16.208 { 00:17:16.208 "method": "nvmf_subsystem_add_listener", 00:17:16.208 "params": { 00:17:16.208 "listen_address": { 00:17:16.208 "adrfam": "IPv4", 00:17:16.208 "traddr": "10.0.0.2", 00:17:16.208 "trsvcid": "4420", 00:17:16.208 "trtype": "TCP" 00:17:16.208 }, 00:17:16.208 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.208 "secure_channel": true 00:17:16.208 } 00:17:16.208 } 00:17:16.208 ] 00:17:16.208 } 00:17:16.208 ] 00:17:16.208 }' 00:17:16.208 14:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:16.466 14:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:17:16.466 "subsystems": [ 00:17:16.466 { 00:17:16.466 "subsystem": "keyring", 00:17:16.466 "config": [ 00:17:16.466 { 00:17:16.466 "method": "keyring_file_add_key", 00:17:16.466 "params": { 00:17:16.466 "name": "key0", 00:17:16.466 "path": "/tmp/tmp.ZbN0wLiCj5" 00:17:16.466 } 00:17:16.466 } 00:17:16.466 ] 00:17:16.466 }, 00:17:16.466 { 00:17:16.466 "subsystem": "iobuf", 00:17:16.466 "config": [ 00:17:16.466 { 00:17:16.466 "method": "iobuf_set_options", 00:17:16.466 "params": { 00:17:16.466 "large_bufsize": 135168, 00:17:16.466 "large_pool_count": 1024, 00:17:16.466 "small_bufsize": 8192, 00:17:16.466 "small_pool_count": 8192 00:17:16.466 } 00:17:16.466 } 00:17:16.466 ] 00:17:16.466 }, 00:17:16.466 { 00:17:16.466 "subsystem": "sock", 00:17:16.466 "config": [ 00:17:16.466 { 00:17:16.466 "method": "sock_set_default_impl", 00:17:16.466 "params": { 00:17:16.466 "impl_name": "posix" 00:17:16.466 } 00:17:16.466 }, 00:17:16.466 { 00:17:16.466 "method": "sock_impl_set_options", 00:17:16.466 "params": { 00:17:16.466 "enable_ktls": false, 00:17:16.466 "enable_placement_id": 0, 00:17:16.466 "enable_quickack": false, 00:17:16.466 "enable_recv_pipe": true, 00:17:16.466 "enable_zerocopy_send_client": false, 00:17:16.466 "enable_zerocopy_send_server": true, 00:17:16.466 "impl_name": "ssl", 00:17:16.466 "recv_buf_size": 4096, 00:17:16.466 "send_buf_size": 4096, 00:17:16.466 "tls_version": 0, 00:17:16.466 "zerocopy_threshold": 0 00:17:16.466 } 00:17:16.466 }, 00:17:16.466 { 00:17:16.466 "method": "sock_impl_set_options", 00:17:16.466 "params": { 00:17:16.466 "enable_ktls": false, 00:17:16.466 "enable_placement_id": 0, 00:17:16.466 "enable_quickack": false, 00:17:16.466 "enable_recv_pipe": true, 00:17:16.466 "enable_zerocopy_send_client": false, 00:17:16.466 "enable_zerocopy_send_server": true, 00:17:16.466 "impl_name": "posix", 00:17:16.466 "recv_buf_size": 2097152, 00:17:16.466 "send_buf_size": 2097152, 00:17:16.466 "tls_version": 0, 00:17:16.466 "zerocopy_threshold": 0 00:17:16.466 } 00:17:16.466 } 00:17:16.466 ] 00:17:16.466 }, 00:17:16.466 { 00:17:16.466 "subsystem": "vmd", 00:17:16.466 "config": [] 00:17:16.466 }, 00:17:16.466 { 00:17:16.466 "subsystem": "accel", 00:17:16.466 "config": [ 00:17:16.466 { 00:17:16.466 "method": "accel_set_options", 00:17:16.466 "params": { 00:17:16.466 "buf_count": 2048, 00:17:16.466 "large_cache_size": 16, 00:17:16.466 "sequence_count": 2048, 00:17:16.466 "small_cache_size": 128, 00:17:16.466 "task_count": 2048 00:17:16.466 } 00:17:16.466 } 00:17:16.466 ] 00:17:16.466 }, 00:17:16.466 { 00:17:16.466 "subsystem": "bdev", 00:17:16.466 "config": [ 00:17:16.466 { 00:17:16.466 "method": "bdev_set_options", 00:17:16.466 "params": { 00:17:16.466 "bdev_auto_examine": true, 
00:17:16.466 "bdev_io_cache_size": 256, 00:17:16.466 "bdev_io_pool_size": 65535, 00:17:16.466 "iobuf_large_cache_size": 16, 00:17:16.466 "iobuf_small_cache_size": 128 00:17:16.466 } 00:17:16.466 }, 00:17:16.466 { 00:17:16.466 "method": "bdev_raid_set_options", 00:17:16.466 "params": { 00:17:16.466 "process_window_size_kb": 1024 00:17:16.466 } 00:17:16.466 }, 00:17:16.466 { 00:17:16.466 "method": "bdev_iscsi_set_options", 00:17:16.466 "params": { 00:17:16.466 "timeout_sec": 30 00:17:16.466 } 00:17:16.466 }, 00:17:16.467 { 00:17:16.467 "method": "bdev_nvme_set_options", 00:17:16.467 "params": { 00:17:16.467 "action_on_timeout": "none", 00:17:16.467 "allow_accel_sequence": false, 00:17:16.467 "arbitration_burst": 0, 00:17:16.467 "bdev_retry_count": 3, 00:17:16.467 "ctrlr_loss_timeout_sec": 0, 00:17:16.467 "delay_cmd_submit": true, 00:17:16.467 "dhchap_dhgroups": [ 00:17:16.467 "null", 00:17:16.467 "ffdhe2048", 00:17:16.467 "ffdhe3072", 00:17:16.467 "ffdhe4096", 00:17:16.467 "ffdhe6144", 00:17:16.467 "ffdhe8192" 00:17:16.467 ], 00:17:16.467 "dhchap_digests": [ 00:17:16.467 "sha256", 00:17:16.467 "sha384", 00:17:16.467 "sha512" 00:17:16.467 ], 00:17:16.467 "disable_auto_failback": false, 00:17:16.467 "fast_io_fail_timeout_sec": 0, 00:17:16.467 "generate_uuids": false, 00:17:16.467 "high_priority_weight": 0, 00:17:16.467 "io_path_stat": false, 00:17:16.467 "io_queue_requests": 512, 00:17:16.467 "keep_alive_timeout_ms": 10000, 00:17:16.467 "low_priority_weight": 0, 00:17:16.467 "medium_priority_weight": 0, 00:17:16.467 "nvme_adminq_poll_period_us": 10000, 00:17:16.467 "nvme_error_stat": false, 00:17:16.467 "nvme_ioq_poll_period_us": 0, 00:17:16.467 "rdma_cm_event_timeout_ms": 0, 00:17:16.467 "rdma_max_cq_size": 0, 00:17:16.467 "rdma_srq_size": 0, 00:17:16.467 "reconnect_delay_sec": 0, 00:17:16.467 "timeout_admin_us": 0, 00:17:16.467 "timeout_us": 0, 00:17:16.467 "transport_ack_timeout": 0, 00:17:16.467 "transport_retry_count": 4, 00:17:16.467 "transport_tos": 0 00:17:16.467 } 00:17:16.467 }, 00:17:16.467 { 00:17:16.467 "method": "bdev_nvme_attach_controller", 00:17:16.467 "params": { 00:17:16.467 "adrfam": "IPv4", 00:17:16.467 "ctrlr_loss_timeout_sec": 0, 00:17:16.467 "ddgst": false, 00:17:16.467 "fast_io_fail_timeout_sec": 0, 00:17:16.467 "hdgst": false, 00:17:16.467 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:16.467 "name": "nvme0", 00:17:16.467 "prchk_guard": false, 00:17:16.467 "prchk_reftag": false, 00:17:16.467 "psk": "key0", 00:17:16.467 "reconnect_delay_sec": 0, 00:17:16.467 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.467 "traddr": "10.0.0.2", 00:17:16.467 "trsvcid": "4420", 00:17:16.467 "trtype": "TCP" 00:17:16.467 } 00:17:16.467 }, 00:17:16.467 { 00:17:16.467 "method": "bdev_nvme_set_hotplug", 00:17:16.467 "params": { 00:17:16.467 "enable": false, 00:17:16.467 "period_us": 100000 00:17:16.467 } 00:17:16.467 }, 00:17:16.467 { 00:17:16.467 "method": "bdev_enable_histogram", 00:17:16.467 "params": { 00:17:16.467 "enable": true, 00:17:16.467 "name": "nvme0n1" 00:17:16.467 } 00:17:16.467 }, 00:17:16.467 { 00:17:16.467 "method": "bdev_wait_for_examine" 00:17:16.467 } 00:17:16.467 ] 00:17:16.467 }, 00:17:16.467 { 00:17:16.467 "subsystem": "nbd", 00:17:16.467 "config": [] 00:17:16.467 } 00:17:16.467 ] 00:17:16.467 }' 00:17:16.467 14:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 85222 00:17:16.467 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85222 ']' 00:17:16.467 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85222 
00:17:16.467 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:16.467 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.467 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85222 00:17:16.467 killing process with pid 85222 00:17:16.467 Received shutdown signal, test time was about 1.000000 seconds 00:17:16.467 00:17:16.467 Latency(us) 00:17:16.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.467 =================================================================================================================== 00:17:16.467 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.467 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:16.467 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:16.467 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85222' 00:17:16.467 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85222 00:17:16.467 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85222 00:17:16.726 14:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 85180 00:17:16.726 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85180 ']' 00:17:16.726 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85180 00:17:16.726 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:16.726 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.726 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85180 00:17:16.726 killing process with pid 85180 00:17:16.726 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:16.726 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:16.726 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85180' 00:17:16.726 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85180 00:17:16.726 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85180 00:17:16.985 14:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:17:16.985 14:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.985 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:16.985 14:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:17:16.985 "subsystems": [ 00:17:16.985 { 00:17:16.985 "subsystem": "keyring", 00:17:16.985 "config": [ 00:17:16.985 { 00:17:16.985 "method": "keyring_file_add_key", 00:17:16.985 "params": { 00:17:16.985 "name": "key0", 00:17:16.985 "path": "/tmp/tmp.ZbN0wLiCj5" 00:17:16.985 } 00:17:16.985 } 00:17:16.985 ] 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "subsystem": "iobuf", 00:17:16.985 "config": [ 00:17:16.985 { 00:17:16.985 "method": "iobuf_set_options", 00:17:16.985 "params": { 00:17:16.985 "large_bufsize": 135168, 00:17:16.985 "large_pool_count": 1024, 00:17:16.985 "small_bufsize": 8192, 00:17:16.985 "small_pool_count": 8192 00:17:16.985 } 00:17:16.985 } 00:17:16.985 ] 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "subsystem": "sock", 00:17:16.985 "config": [ 00:17:16.985 { 00:17:16.985 "method": "sock_set_default_impl", 00:17:16.985 "params": { 00:17:16.985 
"impl_name": "posix" 00:17:16.985 } 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "method": "sock_impl_set_options", 00:17:16.985 "params": { 00:17:16.985 "enable_ktls": false, 00:17:16.985 "enable_placement_id": 0, 00:17:16.985 "enable_quickack": false, 00:17:16.985 "enable_recv_pipe": true, 00:17:16.985 "enable_zerocopy_send_client": false, 00:17:16.985 "enable_zerocopy_send_server": true, 00:17:16.985 "impl_name": "ssl", 00:17:16.985 "recv_buf_size": 4096, 00:17:16.985 "send_buf_size": 4096, 00:17:16.985 "tls_version": 0, 00:17:16.985 "zerocopy_threshold": 0 00:17:16.985 } 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "method": "sock_impl_set_options", 00:17:16.985 "params": { 00:17:16.985 "enable_ktls": false, 00:17:16.985 "enable_placement_id": 0, 00:17:16.985 "enable_quickack": false, 00:17:16.985 "enable_recv_pipe": true, 00:17:16.985 "enable_zerocopy_send_client": false, 00:17:16.985 "enable_zerocopy_send_server": true, 00:17:16.985 "impl_name": "posix", 00:17:16.985 "recv_buf_size": 2097152, 00:17:16.985 "send_buf_size": 2097152, 00:17:16.985 "tls_version": 0, 00:17:16.985 "zerocopy_threshold": 0 00:17:16.985 } 00:17:16.985 } 00:17:16.985 ] 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "subsystem": "vmd", 00:17:16.985 "config": [] 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "subsystem": "accel", 00:17:16.985 "config": [ 00:17:16.985 { 00:17:16.985 "method": "accel_set_options", 00:17:16.985 "params": { 00:17:16.985 "buf_count": 2048, 00:17:16.985 "large_cache_size": 16, 00:17:16.985 "sequence_count": 2048, 00:17:16.985 "small_cache_size": 128, 00:17:16.985 "task_count": 2048 00:17:16.985 } 00:17:16.985 } 00:17:16.985 ] 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "subsystem": "bdev", 00:17:16.985 "config": [ 00:17:16.985 { 00:17:16.985 "method": "bdev_set_options", 00:17:16.985 "params": { 00:17:16.985 "bdev_auto_examine": true, 00:17:16.985 "bdev_io_cache_size": 256, 00:17:16.985 "bdev_io_pool_size": 65535, 00:17:16.985 "iobuf_large_cache_size": 16, 00:17:16.985 "iobuf_small_cache_size": 128 00:17:16.985 } 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "method": "bdev_raid_set_options", 00:17:16.985 "params": { 00:17:16.985 "process_window_size_kb": 1024 00:17:16.985 } 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "method": "bdev_iscsi_set_options", 00:17:16.985 "params": { 00:17:16.985 "timeout_sec": 30 00:17:16.985 } 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "method": "bdev_nvme_set_options", 00:17:16.985 "params": { 00:17:16.985 "action_on_timeout": "none", 00:17:16.985 "allow_accel_sequence": false, 00:17:16.985 "arbitration_burst": 0, 00:17:16.985 "bdev_retry_count": 3, 00:17:16.985 "ctrlr_loss_timeout_sec": 0, 00:17:16.985 "delay_cmd_submit": true, 00:17:16.985 "dhchap_dhgroups": [ 00:17:16.985 "null", 00:17:16.985 "ffdhe2048", 00:17:16.985 "ffdhe3072", 00:17:16.985 "ffdhe4096", 00:17:16.985 "ffdhe6144", 00:17:16.985 "ffdhe8192" 00:17:16.985 ], 00:17:16.985 "dhchap_digests": [ 00:17:16.985 "sha256", 00:17:16.985 "sha384", 00:17:16.985 "sha512" 00:17:16.985 ], 00:17:16.985 "disable_auto_failback": false, 00:17:16.985 "fast_io_fail_timeout_sec": 0, 00:17:16.985 "generate_uuids": false, 00:17:16.985 "high_priority_weight": 0, 00:17:16.985 "io_path_stat": false, 00:17:16.985 "io_queue_requests": 0, 00:17:16.985 "keep_alive_timeout_ms": 10000, 00:17:16.985 "low_priority_weight": 0, 00:17:16.985 "medium_priority_weight": 0, 00:17:16.985 "nvme_adminq_poll_period_us": 10000, 00:17:16.985 "nvme_error_stat": false, 00:17:16.985 "nvme_ioq_poll_period_us": 0, 00:17:16.985 
"rdma_cm_event_timeout_ms": 0, 00:17:16.985 "rdma_max_cq_size": 0, 00:17:16.985 "rdma_srq_size": 0, 00:17:16.985 "reconnect_delay_sec": 0, 00:17:16.985 "timeout_admin_us": 0, 00:17:16.985 "timeout_us": 0, 00:17:16.985 "transport_ack_timeout": 0, 00:17:16.985 "transport_retry_count": 4, 00:17:16.985 "transport_tos": 0 00:17:16.985 } 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "method": "bdev_nvme_set_hotplug", 00:17:16.985 "params": { 00:17:16.985 "enable": false, 00:17:16.985 "period_us": 100000 00:17:16.985 } 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "method": "bdev_malloc_create", 00:17:16.985 "params": { 00:17:16.985 "block_size": 4096, 00:17:16.985 "name": "malloc0", 00:17:16.985 "num_blocks": 8192, 00:17:16.985 "optimal_io_boundary": 0, 00:17:16.985 "physical_block_size": 4096, 00:17:16.985 "uuid": "17334b96-8442-40b7-bccb-1cbd44f184dc" 00:17:16.985 } 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "method": "bdev_wait_for_examine" 00:17:16.985 } 00:17:16.985 ] 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "subsystem": "nbd", 00:17:16.985 "config": [] 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "subsystem": "scheduler", 00:17:16.985 "config": [ 00:17:16.985 { 00:17:16.985 "method": "framework_set_scheduler", 00:17:16.985 "params": { 00:17:16.985 "name": "static" 00:17:16.985 } 00:17:16.985 } 00:17:16.985 ] 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "subsystem": "nvmf", 00:17:16.985 "config": [ 00:17:16.985 { 00:17:16.985 "method": "nvmf_set_config", 00:17:16.985 "params": { 00:17:16.985 "admin_cmd_passthru": { 00:17:16.985 "identify_ctrlr": false 00:17:16.985 }, 00:17:16.985 "discovery_filter": "match_any" 00:17:16.985 } 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "method": "nvmf_set_max_subsystems", 00:17:16.985 "params": { 00:17:16.985 "max_subsystems": 1024 00:17:16.985 } 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "method": "nvmf_set_crdt", 00:17:16.985 "params": { 00:17:16.985 "crdt1": 0, 00:17:16.985 "crdt2": 0, 00:17:16.985 "crdt3": 0 00:17:16.985 } 00:17:16.985 }, 00:17:16.985 { 00:17:16.985 "method": "nvmf_create_transport", 00:17:16.985 "params": { 00:17:16.985 "abort_timeout_sec": 1, 00:17:16.985 "ack_timeout": 0, 00:17:16.985 "buf_cache_size": 4294967295, 00:17:16.986 "c2h_success": false, 00:17:16.986 "data_wr_pool_size": 0, 00:17:16.986 "dif_insert_or_strip": false, 00:17:16.986 "in_capsule_data_size": 4096, 00:17:16.986 "io_unit_size": 131072, 00:17:16.986 "max_aq_depth": 128, 00:17:16.986 "max_io_qpairs_per_ctrlr": 127, 00:17:16.986 "max_io_size": 131072, 00:17:16.986 "max_queue_depth": 128, 00:17:16.986 "num_shared_buffers": 511, 00:17:16.986 "sock_priority": 0, 00:17:16.986 "trtype": "TCP", 00:17:16.986 "zcopy": false 00:17:16.986 } 00:17:16.986 }, 00:17:16.986 { 00:17:16.986 "method": "nvmf_create_subsystem", 00:17:16.986 "params": { 00:17:16.986 "allow_any_host": false, 00:17:16.986 "ana_reporting": false, 00:17:16.986 "max_cntlid": 65519, 00:17:16.986 "max_namespaces": 32, 00:17:16.986 "min_cntlid": 1, 00:17:16.986 "model_number": "SPDK bdev Controller", 00:17:16.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.986 "serial_number": "00000000000000000000" 00:17:16.986 } 00:17:16.986 }, 00:17:16.986 { 00:17:16.986 "method": "nvmf_subsystem_add_host", 00:17:16.986 "params": { 00:17:16.986 "host": "nqn.2016-06.io.spdk:host1", 00:17:16.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.986 "psk": "key0" 00:17:16.986 } 00:17:16.986 }, 00:17:16.986 { 00:17:16.986 "method": "nvmf_subsystem_add_ns", 00:17:16.986 "params": { 00:17:16.986 "namespace": { 00:17:16.986 
"bdev_name": "malloc0", 00:17:16.986 "nguid": "17334B96844240B7BCCB1CBD44F184DC", 00:17:16.986 "no_auto_visible": false, 00:17:16.986 "nsid": 1, 00:17:16.986 "uuid": "17334b96-8442-40b7-bccb-1cbd44f184dc" 00:17:16.986 }, 00:17:16.986 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:16.986 } 00:17:16.986 }, 00:17:16.986 { 00:17:16.986 "method": "nvmf_subsystem_add_listener", 00:17:16.986 "params": { 00:17:16.986 "listen_address": { 00:17:16.986 "adrfam": "IPv4", 00:17:16.986 "traddr": "10.0.0.2", 00:17:16.986 "trsvcid": "4420", 00:17:16.986 "trtype": "TCP" 00:17:16.986 }, 00:17:16.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.986 "secure_channel": true 00:17:16.986 } 00:17:16.986 } 00:17:16.986 ] 00:17:16.986 } 00:17:16.986 ] 00:17:16.986 }' 00:17:16.986 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.986 14:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85308 00:17:16.986 14:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:16.986 14:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85308 00:17:16.986 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85308 ']' 00:17:16.986 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.986 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.986 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.986 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.986 14:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.986 [2024-07-12 14:56:55.534736] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:17:16.986 [2024-07-12 14:56:55.534843] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.245 [2024-07-12 14:56:55.675263] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.245 [2024-07-12 14:56:55.734797] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.245 [2024-07-12 14:56:55.734865] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.245 [2024-07-12 14:56:55.734884] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.245 [2024-07-12 14:56:55.734893] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.245 [2024-07-12 14:56:55.734900] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:17.245 [2024-07-12 14:56:55.734982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.504 [2024-07-12 14:56:55.926759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.504 [2024-07-12 14:56:55.958728] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:17.504 [2024-07-12 14:56:55.959003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=85352 00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 85352 /var/tmp/bdevperf.sock 00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85352 ']' 00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:18.078 14:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:17:18.078 "subsystems": [ 00:17:18.078 { 00:17:18.078 "subsystem": "keyring", 00:17:18.078 "config": [ 00:17:18.078 { 00:17:18.078 "method": "keyring_file_add_key", 00:17:18.078 "params": { 00:17:18.078 "name": "key0", 00:17:18.078 "path": "/tmp/tmp.ZbN0wLiCj5" 00:17:18.078 } 00:17:18.078 } 00:17:18.078 ] 00:17:18.078 }, 00:17:18.078 { 00:17:18.078 "subsystem": "iobuf", 00:17:18.078 "config": [ 00:17:18.078 { 00:17:18.078 "method": "iobuf_set_options", 00:17:18.078 "params": { 00:17:18.078 "large_bufsize": 135168, 00:17:18.078 "large_pool_count": 1024, 00:17:18.078 "small_bufsize": 8192, 00:17:18.078 "small_pool_count": 8192 00:17:18.078 } 00:17:18.078 } 00:17:18.078 ] 00:17:18.078 }, 00:17:18.078 { 00:17:18.078 "subsystem": "sock", 00:17:18.078 "config": [ 00:17:18.078 { 00:17:18.078 "method": "sock_set_default_impl", 00:17:18.078 "params": { 00:17:18.078 "impl_name": "posix" 00:17:18.078 } 00:17:18.078 }, 00:17:18.078 { 00:17:18.078 "method": "sock_impl_set_options", 00:17:18.078 "params": { 00:17:18.078 "enable_ktls": false, 00:17:18.078 "enable_placement_id": 0, 00:17:18.078 "enable_quickack": false, 00:17:18.078 "enable_recv_pipe": true, 00:17:18.078 "enable_zerocopy_send_client": false, 00:17:18.078 "enable_zerocopy_send_server": true, 00:17:18.078 "impl_name": "ssl", 00:17:18.078 "recv_buf_size": 4096, 00:17:18.078 "send_buf_size": 4096, 00:17:18.078 "tls_version": 0, 00:17:18.078 "zerocopy_threshold": 0 00:17:18.078 } 00:17:18.078 }, 00:17:18.078 { 00:17:18.078 "method": "sock_impl_set_options", 00:17:18.078 "params": { 00:17:18.078 "enable_ktls": false, 00:17:18.078 "enable_placement_id": 0, 00:17:18.078 "enable_quickack": false, 00:17:18.078 "enable_recv_pipe": true, 00:17:18.078 "enable_zerocopy_send_client": false, 00:17:18.078 "enable_zerocopy_send_server": true, 00:17:18.078 "impl_name": "posix", 00:17:18.078 "recv_buf_size": 2097152, 00:17:18.078 "send_buf_size": 2097152, 00:17:18.078 "tls_version": 0, 00:17:18.078 "zerocopy_threshold": 0 00:17:18.078 } 00:17:18.078 } 00:17:18.078 ] 00:17:18.078 }, 00:17:18.078 { 00:17:18.078 "subsystem": "vmd", 00:17:18.078 "config": [] 00:17:18.078 }, 00:17:18.078 { 00:17:18.078 "subsystem": "accel", 00:17:18.078 "config": [ 00:17:18.078 { 00:17:18.078 "method": "accel_set_options", 00:17:18.078 "params": { 00:17:18.078 "buf_count": 2048, 00:17:18.078 "large_cache_size": 16, 00:17:18.078 "sequence_count": 2048, 00:17:18.078 "small_cache_size": 128, 00:17:18.078 "task_count": 2048 00:17:18.078 } 00:17:18.078 } 00:17:18.078 ] 00:17:18.078 }, 00:17:18.078 { 00:17:18.078 "subsystem": "bdev", 00:17:18.078 "config": [ 00:17:18.078 { 00:17:18.078 "method": "bdev_set_options", 00:17:18.078 "params": { 00:17:18.078 "bdev_auto_examine": true, 00:17:18.078 "bdev_io_cache_size": 256, 00:17:18.078 "bdev_io_pool_size": 65535, 00:17:18.078 "iobuf_large_cache_size": 16, 00:17:18.078 "iobuf_small_cache_size": 128 00:17:18.078 } 00:17:18.078 }, 00:17:18.078 { 00:17:18.078 "method": "bdev_raid_set_options", 00:17:18.078 "params": { 00:17:18.078 "process_window_size_kb": 1024 00:17:18.078 } 00:17:18.078 }, 00:17:18.078 
{ 00:17:18.078 "method": "bdev_iscsi_set_options", 00:17:18.078 "params": { 00:17:18.078 "timeout_sec": 30 00:17:18.078 } 00:17:18.078 }, 00:17:18.078 { 00:17:18.078 "method": "bdev_nvme_set_options", 00:17:18.078 "params": { 00:17:18.078 "action_on_timeout": "none", 00:17:18.078 "allow_accel_sequence": false, 00:17:18.078 "arbitration_burst": 0, 00:17:18.078 "bdev_retry_count": 3, 00:17:18.078 "ctrlr_loss_timeout_sec": 0, 00:17:18.078 "delay_cmd_submit": true, 00:17:18.078 "dhchap_dhgroups": [ 00:17:18.078 "null", 00:17:18.078 "ffdhe2048", 00:17:18.078 "ffdhe3072", 00:17:18.078 "ffdhe4096", 00:17:18.078 "ffdhe6144", 00:17:18.078 "ffdhe8192" 00:17:18.078 ], 00:17:18.078 "dhchap_digests": [ 00:17:18.078 "sha256", 00:17:18.078 "sha384", 00:17:18.078 "sha512" 00:17:18.078 ], 00:17:18.078 "disable_auto_failback": false, 00:17:18.078 "fast_io_fail_timeout_sec": 0, 00:17:18.078 "generate_uuids": false, 00:17:18.078 "high_priority_weight": 0, 00:17:18.078 "io_path_stat": false, 00:17:18.078 "io_queue_requests": 512, 00:17:18.078 "keep_alive_timeout_ms": 10000, 00:17:18.078 "low_priority_weight": 0, 00:17:18.078 "medium_priority_weight": 0, 00:17:18.078 "nvme_adminq_poll_period_us": 10000, 00:17:18.078 "nvme_error_stat": false, 00:17:18.078 "nvme_ioq_poll_period_us": 0, 00:17:18.078 "rdma_cm_event_timeout_ms": 0, 00:17:18.078 "rdma_max_cq_size": 0, 00:17:18.078 "rdma_srq_size": 0, 00:17:18.078 "reconnect_delay_sec": 0, 00:17:18.079 "timeout_admin_us": 0, 00:17:18.079 "timeout_us": 0, 00:17:18.079 "transport_ack_timeout": 0, 00:17:18.079 "transport_retry_count": 4, 00:17:18.079 "transport_tos": 0 00:17:18.079 } 00:17:18.079 }, 00:17:18.079 { 00:17:18.079 "method": "bdev_nvme_attach_controller", 00:17:18.079 "params": { 00:17:18.079 "adrfam": "IPv4", 00:17:18.079 "ctrlr_loss_timeout_sec": 0, 00:17:18.079 "ddgst": false, 00:17:18.079 "fast_io_fail_timeout_sec": 0, 00:17:18.079 "hdgst": false, 00:17:18.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.079 "name": "nvme0", 00:17:18.079 "prchk_guard": false, 00:17:18.079 "prchk_reftag": false, 00:17:18.079 "psk": "key0", 00:17:18.079 "reconnect_delay_sec": 0, 00:17:18.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.079 "traddr": "10.0.0.2", 00:17:18.079 "trsvcid": "4420", 00:17:18.079 "trtype": "TCP" 00:17:18.079 } 00:17:18.079 }, 00:17:18.079 { 00:17:18.079 "method": "bdev_nvme_set_hotplug", 00:17:18.079 "params": { 00:17:18.079 "enable": false, 00:17:18.079 "period_us": 100000 00:17:18.079 } 00:17:18.079 }, 00:17:18.079 { 00:17:18.079 "method": "bdev_enable_histogram", 00:17:18.079 "params": { 00:17:18.079 "enable": true, 00:17:18.079 "name": "nvme0n1" 00:17:18.079 } 00:17:18.079 }, 00:17:18.079 { 00:17:18.079 "method": "bdev_wait_for_examine" 00:17:18.079 } 00:17:18.079 ] 00:17:18.079 }, 00:17:18.079 { 00:17:18.079 "subsystem": "nbd", 00:17:18.079 "config": [] 00:17:18.079 } 00:17:18.079 ] 00:17:18.079 }' 00:17:18.079 [2024-07-12 14:56:56.652608] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:17:18.079 [2024-07-12 14:56:56.652707] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85352 ] 00:17:18.352 [2024-07-12 14:56:56.788810] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.352 [2024-07-12 14:56:56.860035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.610 [2024-07-12 14:56:57.011348] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.178 14:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.178 14:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:19.178 14:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:19.178 14:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:17:19.436 14:56:58 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.436 14:56:58 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:19.694 Running I/O for 1 seconds... 00:17:20.627 00:17:20.627 Latency(us) 00:17:20.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.627 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:20.627 Verification LBA range: start 0x0 length 0x2000 00:17:20.627 nvme0n1 : 1.02 3833.79 14.98 0.00 0.00 33017.18 6494.02 26571.87 00:17:20.627 =================================================================================================================== 00:17:20.627 Total : 3833.79 14.98 0.00 0.00 33017.18 6494.02 26571.87 00:17:20.627 0 00:17:20.627 14:56:59 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:17:20.627 14:56:59 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:17:20.627 14:56:59 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:20.627 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:17:20.627 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:17:20.627 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:20.627 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:20.627 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:20.627 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:20.627 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:20.627 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:20.627 nvmf_trace.0 00:17:20.886 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:17:20.886 14:56:59 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 85352 00:17:20.886 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85352 ']' 00:17:20.886 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85352 00:17:20.886 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:20.886 
14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:20.886 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85352 00:17:20.886 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:20.886 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:20.886 killing process with pid 85352 00:17:20.886 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85352' 00:17:20.886 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85352 00:17:20.886 Received shutdown signal, test time was about 1.000000 seconds 00:17:20.886 00:17:20.886 Latency(us) 00:17:20.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.886 =================================================================================================================== 00:17:20.886 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:20.886 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85352 00:17:20.886 14:56:59 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:20.886 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:20.886 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:21.145 rmmod nvme_tcp 00:17:21.145 rmmod nvme_fabrics 00:17:21.145 rmmod nvme_keyring 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 85308 ']' 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 85308 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85308 ']' 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85308 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85308 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:21.145 killing process with pid 85308 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85308' 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85308 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85308 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.145 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.404 14:56:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:21.404 14:56:59 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.tPtBLjTxoJ /tmp/tmp.SpkQznurBD /tmp/tmp.ZbN0wLiCj5 00:17:21.404 00:17:21.404 real 1m24.701s 00:17:21.404 user 2m17.322s 00:17:21.404 sys 0m27.024s 00:17:21.404 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:21.404 14:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.404 ************************************ 00:17:21.404 END TEST nvmf_tls 00:17:21.404 ************************************ 00:17:21.404 14:56:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:21.404 14:56:59 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:21.404 14:56:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:21.404 14:56:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:21.404 14:56:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:21.404 ************************************ 00:17:21.404 START TEST nvmf_fips 00:17:21.404 ************************************ 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:21.404 * Looking for test storage... 
00:17:21.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.404 14:56:59 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:21.405 14:56:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:17:21.405 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:17:21.663 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:21.663 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:21.663 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:21.663 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:21.663 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:17:21.663 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:17:21.663 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:21.663 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:17:21.663 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:21.663 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:17:21.663 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:21.663 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:17:21.663 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:21.663 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:17:21.663 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:17:21.664 Error setting digest 00:17:21.664 00521B9D4C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:21.664 00521B9D4C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:21.664 Cannot find device "nvmf_tgt_br" 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:21.664 Cannot find device "nvmf_tgt_br2" 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:21.664 Cannot find device "nvmf_tgt_br" 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:21.664 Cannot find device "nvmf_tgt_br2" 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:21.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:21.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:21.664 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:21.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:17:21.922 00:17:21.922 --- 10.0.0.2 ping statistics --- 00:17:21.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.922 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:21.922 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:21.922 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:17:21.922 00:17:21.922 --- 10.0.0.3 ping statistics --- 00:17:21.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.922 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:21.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:21.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:17:21.922 00:17:21.922 --- 10.0.0.1 ping statistics --- 00:17:21.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.922 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=85638 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 85638 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85638 ']' 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:21.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.922 14:57:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:22.180 [2024-07-12 14:57:00.601391] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:17:22.180 [2024-07-12 14:57:00.601488] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.180 [2024-07-12 14:57:00.741178] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.180 [2024-07-12 14:57:00.822843] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.180 [2024-07-12 14:57:00.822907] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.180 [2024-07-12 14:57:00.822920] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.180 [2024-07-12 14:57:00.822929] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.180 [2024-07-12 14:57:00.822936] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.180 [2024-07-12 14:57:00.822964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.113 14:57:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.113 14:57:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:17:23.113 14:57:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:23.113 14:57:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:23.113 14:57:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:23.113 14:57:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.113 14:57:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:23.113 14:57:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:23.113 14:57:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:23.113 14:57:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:23.113 14:57:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:23.113 14:57:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:23.113 14:57:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:23.113 14:57:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.371 [2024-07-12 14:57:01.910803] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.371 [2024-07-12 14:57:01.926744] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:23.371 [2024-07-12 14:57:01.926971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.371 [2024-07-12 14:57:01.953540] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:23.371 malloc0 00:17:23.371 14:57:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:23.371 14:57:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=85697 00:17:23.371 14:57:01 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@148 -- # waitforlisten 85697 /var/tmp/bdevperf.sock 00:17:23.371 14:57:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:23.371 14:57:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85697 ']' 00:17:23.371 14:57:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.371 14:57:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:23.371 14:57:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:23.371 14:57:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.371 14:57:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:23.628 [2024-07-12 14:57:02.068393] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:17:23.628 [2024-07-12 14:57:02.068535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85697 ] 00:17:23.628 [2024-07-12 14:57:02.206163] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.628 [2024-07-12 14:57:02.278094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.649 14:57:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.649 14:57:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:17:24.649 14:57:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:24.907 [2024-07-12 14:57:03.347182] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:24.907 [2024-07-12 14:57:03.347292] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:24.907 TLSTESTn1 00:17:24.908 14:57:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:25.165 Running I/O for 10 seconds... 
00:17:35.138 00:17:35.138 Latency(us) 00:17:35.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.138 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:35.138 Verification LBA range: start 0x0 length 0x2000 00:17:35.138 TLSTESTn1 : 10.02 3887.86 15.19 0.00 0.00 32859.82 6404.65 33363.78 00:17:35.138 =================================================================================================================== 00:17:35.138 Total : 3887.86 15.19 0.00 0.00 32859.82 6404.65 33363.78 00:17:35.138 0 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:35.138 nvmf_trace.0 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85697 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85697 ']' 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85697 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85697 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:35.138 killing process with pid 85697 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85697' 00:17:35.138 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85697 00:17:35.138 Received shutdown signal, test time was about 10.000000 seconds 00:17:35.138 00:17:35.138 Latency(us) 00:17:35.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.138 =================================================================================================================== 00:17:35.139 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:35.139 [2024-07-12 14:57:13.728130] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:35.139 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85697 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
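The cleanup that follows archives the target's shared-memory trace file so the run can be inspected offline; the same two commands (one from the process_shm step in the xtrace, one from the target's startup notice earlier in this log) work for ad-hoc debugging, assuming the target was started with the 0xFFFF tracepoint mask as here.

# Archive the trace buffer exactly as process_shm does in this run:
tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
# Or decode it live while the target is still up, per the startup notice:
spdk_trace -s nvmf -i 0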
00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.397 rmmod nvme_tcp 00:17:35.397 rmmod nvme_fabrics 00:17:35.397 rmmod nvme_keyring 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 85638 ']' 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 85638 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85638 ']' 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85638 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85638 00:17:35.397 14:57:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:35.397 14:57:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:35.397 killing process with pid 85638 00:17:35.397 14:57:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85638' 00:17:35.397 14:57:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85638 00:17:35.397 [2024-07-12 14:57:14.002612] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:35.397 14:57:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85638 00:17:35.656 14:57:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.656 14:57:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.656 14:57:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.656 14:57:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.656 14:57:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.656 14:57:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.656 14:57:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.656 14:57:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.656 14:57:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:35.656 14:57:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.656 00:17:35.656 real 0m14.323s 00:17:35.656 user 0m19.832s 00:17:35.656 sys 0m5.636s 00:17:35.656 14:57:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:35.656 14:57:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:35.656 ************************************ 00:17:35.656 END TEST nvmf_fips 00:17:35.656 ************************************ 00:17:35.656 14:57:14 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:35.656 14:57:14 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:17:35.656 14:57:14 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:17:35.656 14:57:14 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:17:35.656 14:57:14 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:35.656 14:57:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:35.656 14:57:14 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:17:35.656 14:57:14 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:35.656 14:57:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:35.656 14:57:14 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:17:35.656 14:57:14 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:35.656 14:57:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:35.656 14:57:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.656 14:57:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:35.656 ************************************ 00:17:35.656 START TEST nvmf_multicontroller 00:17:35.656 ************************************ 00:17:35.656 14:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:35.916 * Looking for test storage... 00:17:35.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
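common.sh only exports the initiator-side defaults here (NVME_CONNECT, NVME_HOSTNQN/NVME_HOSTID, NVME_SUBNQN); no kernel-initiator connect appears in this excerpt. Purely for illustration, this is how those variables are conventionally handed to nvme-cli, using the host UUID generated in this run and the 10.0.0.2:4420 listener used throughout:

# Illustrative only: not taken from this log. Combines the defaults that
# common.sh just exported into a kernel-initiator connect.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c \
    --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c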
00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.916 14:57:14 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:35.916 Cannot find device "nvmf_tgt_br" 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:35.916 Cannot find device "nvmf_tgt_br2" 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:35.916 Cannot find device "nvmf_tgt_br" 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:35.916 Cannot find device "nvmf_tgt_br2" 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:35.916 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:36.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:17:36.175 00:17:36.175 --- 10.0.0.2 ping statistics --- 00:17:36.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.175 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:36.175 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:36.175 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:17:36.175 00:17:36.175 --- 10.0.0.3 ping statistics --- 00:17:36.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.175 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:36.175 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:36.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:36.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:17:36.176 00:17:36.176 --- 10.0.0.1 ping statistics --- 00:17:36.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.176 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=86057 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 86057 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86057 ']' 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.176 14:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:36.176 [2024-07-12 14:57:14.807769] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:17:36.176 [2024-07-12 14:57:14.807862] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.435 [2024-07-12 14:57:14.944281] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:36.435 [2024-07-12 14:57:15.004020] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
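The pings above confirm the virtual topology that nvmf_veth_init just built: two veth pairs plus the namespace loopback, bridged on the host side, with 10.0.0.1 facing the initiator and 10.0.0.2/10.0.0.3 inside the nvmf_tgt_ns_spdk namespace. A condensed sketch of the ip/iptables commands from this run, with interface names and addresses exactly as logged (run as root):

# Condensed from the nvmf_veth_init steps in this log.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side ends move into the namespace; host-side ends stay put.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side ends together and let NVMe/TCP traffic in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check both directions, as the log does:
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1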
00:17:36.435 [2024-07-12 14:57:15.004072] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.435 [2024-07-12 14:57:15.004085] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.435 [2024-07-12 14:57:15.004094] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.435 [2024-07-12 14:57:15.004101] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.435 [2024-07-12 14:57:15.004236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.435 [2024-07-12 14:57:15.004390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.435 [2024-07-12 14:57:15.005020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:36.694 [2024-07-12 14:57:15.137323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:36.694 Malloc0 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:36.694 [2024-07-12 14:57:15.206430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:36.694 [2024-07-12 14:57:15.214394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:36.694 Malloc1 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@44 -- # bdevperf_pid=86096 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86096 /var/tmp/bdevperf.sock 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86096 ']' 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.694 14:57:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:38.065 NVMe0n1 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.065 1 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local 
arg=rpc_cmd 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:38.065 2024/07/12 14:57:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:38.065 request: 00:17:38.065 { 00:17:38.065 "method": "bdev_nvme_attach_controller", 00:17:38.065 "params": { 00:17:38.065 "name": "NVMe0", 00:17:38.065 "trtype": "tcp", 00:17:38.065 "traddr": "10.0.0.2", 00:17:38.065 "adrfam": "ipv4", 00:17:38.065 "trsvcid": "4420", 00:17:38.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.065 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:17:38.065 "hostaddr": "10.0.0.2", 00:17:38.065 "hostsvcid": "60000", 00:17:38.065 "prchk_reftag": false, 00:17:38.065 "prchk_guard": false, 00:17:38.065 "hdgst": false, 00:17:38.065 "ddgst": false 00:17:38.065 } 00:17:38.065 } 00:17:38.065 Got JSON-RPC error response 00:17:38.065 GoRPCClient: error on JSON-RPC call 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:38.065 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:38.066 14:57:16 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:38.066 2024/07/12 14:57:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:38.066 request: 00:17:38.066 { 00:17:38.066 "method": "bdev_nvme_attach_controller", 00:17:38.066 "params": { 00:17:38.066 "name": "NVMe0", 00:17:38.066 "trtype": "tcp", 00:17:38.066 "traddr": "10.0.0.2", 00:17:38.066 "adrfam": "ipv4", 00:17:38.066 "trsvcid": "4420", 00:17:38.066 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:38.066 "hostaddr": "10.0.0.2", 00:17:38.066 "hostsvcid": "60000", 00:17:38.066 "prchk_reftag": false, 00:17:38.066 "prchk_guard": false, 00:17:38.066 "hdgst": false, 00:17:38.066 "ddgst": false 00:17:38.066 } 00:17:38.066 } 00:17:38.066 Got JSON-RPC error response 00:17:38.066 GoRPCClient: error on JSON-RPC call 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:38.066 14:57:16 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:38.066 2024/07/12 14:57:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:17:38.066 request: 00:17:38.066 { 00:17:38.066 "method": "bdev_nvme_attach_controller", 00:17:38.066 "params": { 00:17:38.066 "name": "NVMe0", 00:17:38.066 "trtype": "tcp", 00:17:38.066 "traddr": "10.0.0.2", 00:17:38.066 "adrfam": "ipv4", 00:17:38.066 "trsvcid": "4420", 00:17:38.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.066 "hostaddr": "10.0.0.2", 00:17:38.066 "hostsvcid": "60000", 00:17:38.066 "prchk_reftag": false, 00:17:38.066 "prchk_guard": false, 00:17:38.066 "hdgst": false, 00:17:38.066 "ddgst": false, 00:17:38.066 "multipath": "disable" 00:17:38.066 } 00:17:38.066 } 00:17:38.066 Got JSON-RPC error response 00:17:38.066 GoRPCClient: error on JSON-RPC call 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:38.066 2024/07/12 14:57:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 
ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:38.066 request: 00:17:38.066 { 00:17:38.066 "method": "bdev_nvme_attach_controller", 00:17:38.066 "params": { 00:17:38.066 "name": "NVMe0", 00:17:38.066 "trtype": "tcp", 00:17:38.066 "traddr": "10.0.0.2", 00:17:38.066 "adrfam": "ipv4", 00:17:38.066 "trsvcid": "4420", 00:17:38.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.066 "hostaddr": "10.0.0.2", 00:17:38.066 "hostsvcid": "60000", 00:17:38.066 "prchk_reftag": false, 00:17:38.066 "prchk_guard": false, 00:17:38.066 "hdgst": false, 00:17:38.066 "ddgst": false, 00:17:38.066 "multipath": "failover" 00:17:38.066 } 00:17:38.066 } 00:17:38.066 Got JSON-RPC error response 00:17:38.066 GoRPCClient: error on JSON-RPC call 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:38.066 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:38.066 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:17:38.066 14:57:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:39.438 0 00:17:39.438 14:57:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:17:39.438 14:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.438 14:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:39.438 14:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.438 14:57:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 86096 00:17:39.438 14:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86096 ']' 00:17:39.438 14:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86096 00:17:39.438 14:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:17:39.438 14:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.438 14:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86096 00:17:39.438 14:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:39.438 14:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:39.438 14:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86096' 00:17:39.438 killing process with pid 86096 00:17:39.438 14:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86096 00:17:39.438 14:57:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86096 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@1611 -- # sort -u 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:17:39.438 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:39.438 [2024-07-12 14:57:15.327781] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:17:39.438 [2024-07-12 14:57:15.327909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86096 ] 00:17:39.438 [2024-07-12 14:57:15.467530] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.438 [2024-07-12 14:57:15.537747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.438 [2024-07-12 14:57:16.635285] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 984c3841-146b-4258-876b-49ffcaf2ec77 already exists 00:17:39.438 [2024-07-12 14:57:16.635339] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:984c3841-146b-4258-876b-49ffcaf2ec77 alias for bdev NVMe1n1 00:17:39.438 [2024-07-12 14:57:16.635357] bdev_nvme.c:4325:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:39.438 Running I/O for 1 seconds... 00:17:39.438 00:17:39.438 Latency(us) 00:17:39.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.438 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:17:39.438 NVMe0n1 : 1.01 17564.13 68.61 0.00 0.00 7262.78 3783.21 18350.08 00:17:39.438 =================================================================================================================== 00:17:39.438 Total : 17564.13 68.61 0.00 0.00 7262.78 3783.21 18350.08 00:17:39.438 Received shutdown signal, test time was about 1.000000 seconds 00:17:39.438 00:17:39.438 Latency(us) 00:17:39.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.438 =================================================================================================================== 00:17:39.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:39.438 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:39.438 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:39.438 rmmod nvme_tcp 00:17:39.438 rmmod nvme_fabrics 00:17:39.696 rmmod nvme_keyring 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:39.696 14:57:18 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 86057 ']' 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 86057 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86057 ']' 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86057 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86057 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86057' 00:17:39.696 killing process with pid 86057 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86057 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86057 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.696 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.953 14:57:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:39.953 00:17:39.953 real 0m4.079s 00:17:39.953 user 0m13.005s 00:17:39.953 sys 0m0.915s 00:17:39.953 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:39.953 ************************************ 00:17:39.953 14:57:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:39.953 END TEST nvmf_multicontroller 00:17:39.953 ************************************ 00:17:39.953 14:57:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:39.953 14:57:18 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:39.953 14:57:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:39.953 14:57:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:39.953 14:57:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:39.953 ************************************ 00:17:39.953 START TEST nvmf_aer 00:17:39.953 ************************************ 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:39.953 * Looking for test storage... 00:17:39.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.953 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:39.954 Cannot find device "nvmf_tgt_br" 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:39.954 Cannot find device "nvmf_tgt_br2" 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:39.954 Cannot find device "nvmf_tgt_br" 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:39.954 Cannot find device "nvmf_tgt_br2" 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:17:39.954 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:40.211 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:40.211 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:40.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.211 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:17:40.211 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:40.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.211 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:17:40.211 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:40.211 
14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:40.211 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:40.211 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:40.211 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:40.211 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:40.211 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:40.211 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:40.211 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:40.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:17:40.212 00:17:40.212 --- 10.0.0.2 ping statistics --- 00:17:40.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.212 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:40.212 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:40.212 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:17:40.212 00:17:40.212 --- 10.0.0.3 ping statistics --- 00:17:40.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.212 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:40.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:40.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:40.212 00:17:40.212 --- 10.0.0.1 ping statistics --- 00:17:40.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.212 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=86337 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 86337 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 86337 ']' 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.212 14:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:40.469 [2024-07-12 14:57:18.905269] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:17:40.469 [2024-07-12 14:57:18.905358] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.469 [2024-07-12 14:57:19.038457] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:40.469 [2024-07-12 14:57:19.099268] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.469 [2024-07-12 14:57:19.099318] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
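The nvmf_veth_init steps traced above amount to the topology sketched below; this is a condensed, hand-written approximation that reuses the interface names and addresses from this run rather than quoting nvmf/common.sh verbatim:

    # Target interfaces live in a private namespace; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end, root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target end
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target end
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                               # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    # A bridge in the root namespace ties the two halves together.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Sanity pings in both directions, as seen in the statistics above.
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1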
00:17:40.469 [2024-07-12 14:57:19.099330] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.469 [2024-07-12 14:57:19.099339] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.469 [2024-07-12 14:57:19.099346] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.469 [2024-07-12 14:57:19.099439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.469 [2024-07-12 14:57:19.100083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.469 [2024-07-12 14:57:19.100193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.469 [2024-07-12 14:57:19.100201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:41.401 [2024-07-12 14:57:19.870730] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:41.401 Malloc0 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:41.401 [2024-07-12 14:57:19.924477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:41.401 [ 00:17:41.401 { 00:17:41.401 "allow_any_host": true, 00:17:41.401 "hosts": [], 00:17:41.401 "listen_addresses": [], 00:17:41.401 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:41.401 "subtype": "Discovery" 00:17:41.401 }, 00:17:41.401 { 00:17:41.401 "allow_any_host": true, 00:17:41.401 "hosts": [], 00:17:41.401 "listen_addresses": [ 00:17:41.401 { 00:17:41.401 "adrfam": "IPv4", 00:17:41.401 "traddr": "10.0.0.2", 00:17:41.401 "trsvcid": "4420", 00:17:41.401 "trtype": "TCP" 00:17:41.401 } 00:17:41.401 ], 00:17:41.401 "max_cntlid": 65519, 00:17:41.401 "max_namespaces": 2, 00:17:41.401 "min_cntlid": 1, 00:17:41.401 "model_number": "SPDK bdev Controller", 00:17:41.401 "namespaces": [ 00:17:41.401 { 00:17:41.401 "bdev_name": "Malloc0", 00:17:41.401 "name": "Malloc0", 00:17:41.401 "nguid": "EEE36293F04640809DE10A5D5CCF1B10", 00:17:41.401 "nsid": 1, 00:17:41.401 "uuid": "eee36293-f046-4080-9de1-0a5d5ccf1b10" 00:17:41.401 } 00:17:41.401 ], 00:17:41.401 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.401 "serial_number": "SPDK00000000000001", 00:17:41.401 "subtype": "NVMe" 00:17:41.401 } 00:17:41.401 ] 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=86391 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:17:41.401 14:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:17:41.401 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:41.401 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:17:41.401 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:17:41.401 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:17:41.659 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:41.659 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:41.659 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:17:41.659 14:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:41.659 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.659 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:41.659 Malloc1 00:17:41.659 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.659 14:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:41.659 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:41.660 Asynchronous Event Request test 00:17:41.660 Attaching to 10.0.0.2 00:17:41.660 Attached to 10.0.0.2 00:17:41.660 Registering asynchronous event callbacks... 00:17:41.660 Starting namespace attribute notice tests for all controllers... 00:17:41.660 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:41.660 aer_cb - Changed Namespace 00:17:41.660 Cleaning up... 00:17:41.660 [ 00:17:41.660 { 00:17:41.660 "allow_any_host": true, 00:17:41.660 "hosts": [], 00:17:41.660 "listen_addresses": [], 00:17:41.660 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:41.660 "subtype": "Discovery" 00:17:41.660 }, 00:17:41.660 { 00:17:41.660 "allow_any_host": true, 00:17:41.660 "hosts": [], 00:17:41.660 "listen_addresses": [ 00:17:41.660 { 00:17:41.660 "adrfam": "IPv4", 00:17:41.660 "traddr": "10.0.0.2", 00:17:41.660 "trsvcid": "4420", 00:17:41.660 "trtype": "TCP" 00:17:41.660 } 00:17:41.660 ], 00:17:41.660 "max_cntlid": 65519, 00:17:41.660 "max_namespaces": 2, 00:17:41.660 "min_cntlid": 1, 00:17:41.660 "model_number": "SPDK bdev Controller", 00:17:41.660 "namespaces": [ 00:17:41.660 { 00:17:41.660 "bdev_name": "Malloc0", 00:17:41.660 "name": "Malloc0", 00:17:41.660 "nguid": "EEE36293F04640809DE10A5D5CCF1B10", 00:17:41.660 "nsid": 1, 00:17:41.660 "uuid": "eee36293-f046-4080-9de1-0a5d5ccf1b10" 00:17:41.660 }, 00:17:41.660 { 00:17:41.660 "bdev_name": "Malloc1", 00:17:41.660 "name": "Malloc1", 00:17:41.660 "nguid": "E469B085E4714831BA176D2E19EBB2DC", 00:17:41.660 "nsid": 2, 00:17:41.660 "uuid": "e469b085-e471-4831-ba17-6d2e19ebb2dc" 00:17:41.660 } 00:17:41.660 ], 00:17:41.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.660 "serial_number": "SPDK00000000000001", 00:17:41.660 "subtype": "NVMe" 00:17:41.660 } 00:17:41.660 ] 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 86391 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.660 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.919 rmmod nvme_tcp 00:17:41.919 rmmod nvme_fabrics 00:17:41.919 rmmod nvme_keyring 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 86337 ']' 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 86337 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 86337 ']' 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 86337 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86337 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:41.919 killing process with pid 86337 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86337' 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 86337 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 86337 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
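The aer run above is driven entirely over JSON-RPC; rpc_cmd is a thin wrapper around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock socket. A rough standalone equivalent of the sequence it exercised, with the same method names and arguments as in the trace and the glue code approximated:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # The aer tool attaches, registers its AER callback, then creates the touch file;
    # the script polls for that file before changing anything on the target.
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
    # Hot-adding a second namespace triggers the "Changed Namespace" notice seen in the output.
    $rpc bdev_malloc_create 64 4096 --name Malloc1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"
    # Teardown mirrors the trace that follows.
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_malloc_delete Malloc1
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1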
00:17:41.919 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.178 14:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:42.178 ************************************ 00:17:42.178 END TEST nvmf_aer 00:17:42.178 ************************************ 00:17:42.178 00:17:42.178 real 0m2.165s 00:17:42.178 user 0m5.968s 00:17:42.178 sys 0m0.543s 00:17:42.178 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:42.178 14:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:42.178 14:57:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:42.178 14:57:20 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:42.178 14:57:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:42.178 14:57:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:42.178 14:57:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:42.178 ************************************ 00:17:42.178 START TEST nvmf_async_init 00:17:42.178 ************************************ 00:17:42.178 14:57:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:42.178 * Looking for test storage... 00:17:42.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:42.178 14:57:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:42.178 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:42.178 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.178 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.178 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.178 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:42.179 
14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4580d4036f8f4f4e996615e4ef7ea356 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:42.179 Cannot find device "nvmf_tgt_br" 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:42.179 Cannot find device "nvmf_tgt_br2" 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:42.179 Cannot find device "nvmf_tgt_br" 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:42.179 Cannot find device "nvmf_tgt_br2" 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:17:42.179 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:42.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:42.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:42.438 14:57:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:42.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:17:42.438 00:17:42.438 --- 10.0.0.2 ping statistics --- 00:17:42.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.438 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:42.438 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:42.438 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:17:42.438 00:17:42.438 --- 10.0.0.3 ping statistics --- 00:17:42.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.438 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:42.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:42.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:42.438 00:17:42.438 --- 10.0.0.1 ping statistics --- 00:17:42.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.438 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:42.438 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:42.697 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:42.697 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.697 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:42.697 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:42.697 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=86563 00:17:42.697 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:42.697 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 86563 00:17:42.697 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 86563 ']' 00:17:42.697 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.697 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:42.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.697 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.697 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:42.697 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:42.697 [2024-07-12 14:57:21.148233] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:17:42.697 [2024-07-12 14:57:21.148835] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.697 [2024-07-12 14:57:21.287549] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.956 [2024-07-12 14:57:21.355895] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.956 [2024-07-12 14:57:21.355957] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
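nvmfappstart here boils down to launching nvmf_tgt inside the namespace built above and waiting for its RPC socket to answer. A minimal sketch of that pattern follows; waitforlisten's real retry logic lives in autotest_common.sh, and the rpc_get_methods probe below is only an approximation of it:

    # Flags as in this run: -i 0 = shared-memory id, -e 0xFFFF = tracepoint group mask,
    # -m 0x1 = single-core mask (hence "Total cores available: 1" above).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll the UNIX-domain RPC socket until the target responds.
    until $rpc -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.1
    done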
00:17:42.956 [2024-07-12 14:57:21.355971] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.956 [2024-07-12 14:57:21.355981] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.956 [2024-07-12 14:57:21.355990] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.956 [2024-07-12 14:57:21.356018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:42.956 [2024-07-12 14:57:21.492888] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:42.956 null0 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4580d4036f8f4f4e996615e4ef7ea356 00:17:42.956 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.957 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:42.957 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.957 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:42.957 
14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.957 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:42.957 [2024-07-12 14:57:21.532989] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.957 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.957 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:42.957 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.957 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.215 nvme0n1 00:17:43.215 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.215 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:43.215 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.215 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.215 [ 00:17:43.215 { 00:17:43.215 "aliases": [ 00:17:43.215 "4580d403-6f8f-4f4e-9966-15e4ef7ea356" 00:17:43.215 ], 00:17:43.215 "assigned_rate_limits": { 00:17:43.215 "r_mbytes_per_sec": 0, 00:17:43.215 "rw_ios_per_sec": 0, 00:17:43.215 "rw_mbytes_per_sec": 0, 00:17:43.215 "w_mbytes_per_sec": 0 00:17:43.215 }, 00:17:43.215 "block_size": 512, 00:17:43.215 "claimed": false, 00:17:43.215 "driver_specific": { 00:17:43.215 "mp_policy": "active_passive", 00:17:43.215 "nvme": [ 00:17:43.215 { 00:17:43.215 "ctrlr_data": { 00:17:43.215 "ana_reporting": false, 00:17:43.215 "cntlid": 1, 00:17:43.215 "firmware_revision": "24.09", 00:17:43.215 "model_number": "SPDK bdev Controller", 00:17:43.215 "multi_ctrlr": true, 00:17:43.215 "oacs": { 00:17:43.215 "firmware": 0, 00:17:43.215 "format": 0, 00:17:43.215 "ns_manage": 0, 00:17:43.215 "security": 0 00:17:43.215 }, 00:17:43.215 "serial_number": "00000000000000000000", 00:17:43.215 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:43.215 "vendor_id": "0x8086" 00:17:43.215 }, 00:17:43.215 "ns_data": { 00:17:43.215 "can_share": true, 00:17:43.215 "id": 1 00:17:43.215 }, 00:17:43.215 "trid": { 00:17:43.215 "adrfam": "IPv4", 00:17:43.215 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:43.215 "traddr": "10.0.0.2", 00:17:43.215 "trsvcid": "4420", 00:17:43.215 "trtype": "TCP" 00:17:43.215 }, 00:17:43.215 "vs": { 00:17:43.215 "nvme_version": "1.3" 00:17:43.215 } 00:17:43.215 } 00:17:43.215 ] 00:17:43.215 }, 00:17:43.215 "memory_domains": [ 00:17:43.215 { 00:17:43.215 "dma_device_id": "system", 00:17:43.215 "dma_device_type": 1 00:17:43.215 } 00:17:43.215 ], 00:17:43.215 "name": "nvme0n1", 00:17:43.215 "num_blocks": 2097152, 00:17:43.215 "product_name": "NVMe disk", 00:17:43.215 "supported_io_types": { 00:17:43.215 "abort": true, 00:17:43.215 "compare": true, 00:17:43.215 "compare_and_write": true, 00:17:43.215 "copy": true, 00:17:43.215 "flush": true, 00:17:43.215 "get_zone_info": false, 00:17:43.215 "nvme_admin": true, 00:17:43.215 "nvme_io": true, 00:17:43.215 "nvme_io_md": false, 00:17:43.215 "nvme_iov_md": false, 00:17:43.215 "read": true, 00:17:43.215 "reset": true, 00:17:43.215 "seek_data": false, 00:17:43.215 "seek_hole": false, 00:17:43.215 "unmap": false, 00:17:43.215 "write": true, 00:17:43.215 "write_zeroes": true, 00:17:43.215 "zcopy": false, 00:17:43.215 
"zone_append": false, 00:17:43.215 "zone_management": false 00:17:43.215 }, 00:17:43.215 "uuid": "4580d403-6f8f-4f4e-9966-15e4ef7ea356", 00:17:43.215 "zoned": false 00:17:43.215 } 00:17:43.215 ] 00:17:43.215 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.215 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:43.215 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.215 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.215 [2024-07-12 14:57:21.808961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:43.215 [2024-07-12 14:57:21.809106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cbc20 (9): Bad file descriptor 00:17:43.474 [2024-07-12 14:57:21.940763] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.474 [ 00:17:43.474 { 00:17:43.474 "aliases": [ 00:17:43.474 "4580d403-6f8f-4f4e-9966-15e4ef7ea356" 00:17:43.474 ], 00:17:43.474 "assigned_rate_limits": { 00:17:43.474 "r_mbytes_per_sec": 0, 00:17:43.474 "rw_ios_per_sec": 0, 00:17:43.474 "rw_mbytes_per_sec": 0, 00:17:43.474 "w_mbytes_per_sec": 0 00:17:43.474 }, 00:17:43.474 "block_size": 512, 00:17:43.474 "claimed": false, 00:17:43.474 "driver_specific": { 00:17:43.474 "mp_policy": "active_passive", 00:17:43.474 "nvme": [ 00:17:43.474 { 00:17:43.474 "ctrlr_data": { 00:17:43.474 "ana_reporting": false, 00:17:43.474 "cntlid": 2, 00:17:43.474 "firmware_revision": "24.09", 00:17:43.474 "model_number": "SPDK bdev Controller", 00:17:43.474 "multi_ctrlr": true, 00:17:43.474 "oacs": { 00:17:43.474 "firmware": 0, 00:17:43.474 "format": 0, 00:17:43.474 "ns_manage": 0, 00:17:43.474 "security": 0 00:17:43.474 }, 00:17:43.474 "serial_number": "00000000000000000000", 00:17:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:43.474 "vendor_id": "0x8086" 00:17:43.474 }, 00:17:43.474 "ns_data": { 00:17:43.474 "can_share": true, 00:17:43.474 "id": 1 00:17:43.474 }, 00:17:43.474 "trid": { 00:17:43.474 "adrfam": "IPv4", 00:17:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:43.474 "traddr": "10.0.0.2", 00:17:43.474 "trsvcid": "4420", 00:17:43.474 "trtype": "TCP" 00:17:43.474 }, 00:17:43.474 "vs": { 00:17:43.474 "nvme_version": "1.3" 00:17:43.474 } 00:17:43.474 } 00:17:43.474 ] 00:17:43.474 }, 00:17:43.474 "memory_domains": [ 00:17:43.474 { 00:17:43.474 "dma_device_id": "system", 00:17:43.474 "dma_device_type": 1 00:17:43.474 } 00:17:43.474 ], 00:17:43.474 "name": "nvme0n1", 00:17:43.474 "num_blocks": 2097152, 00:17:43.474 "product_name": "NVMe disk", 00:17:43.474 "supported_io_types": { 00:17:43.474 "abort": true, 00:17:43.474 "compare": true, 00:17:43.474 "compare_and_write": true, 00:17:43.474 "copy": true, 00:17:43.474 "flush": true, 00:17:43.474 "get_zone_info": false, 00:17:43.474 "nvme_admin": true, 00:17:43.474 "nvme_io": true, 00:17:43.474 "nvme_io_md": false, 00:17:43.474 "nvme_iov_md": false, 00:17:43.474 "read": true, 
00:17:43.474 "reset": true, 00:17:43.474 "seek_data": false, 00:17:43.474 "seek_hole": false, 00:17:43.474 "unmap": false, 00:17:43.474 "write": true, 00:17:43.474 "write_zeroes": true, 00:17:43.474 "zcopy": false, 00:17:43.474 "zone_append": false, 00:17:43.474 "zone_management": false 00:17:43.474 }, 00:17:43.474 "uuid": "4580d403-6f8f-4f4e-9966-15e4ef7ea356", 00:17:43.474 "zoned": false 00:17:43.474 } 00:17:43.474 ] 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.k59u51tUzL 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.k59u51tUzL 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.474 14:57:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.474 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.474 14:57:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:43.474 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.474 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.474 [2024-07-12 14:57:22.005150] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.474 [2024-07-12 14:57:22.005326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:43.474 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.474 14:57:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k59u51tUzL 00:17:43.474 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.474 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.474 [2024-07-12 14:57:22.013121] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:43.474 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.474 14:57:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k59u51tUzL 00:17:43.474 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.474 14:57:22 
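Above, the async_init test switches cnode0 to explicit host authorization and exercises TLS: a PSK in NVMe TLS interchange format is written to a mktemp file with 0600 permissions, allow_any_host is disabled, a --secure-channel listener is added on port 4421, host1 is registered with --psk, and the initiator then attaches with the same key. A condensed sketch using the same RPCs (key value and NQNs copied from the trace; the temp path is whatever mktemp returns):

rpc="sudo scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed stock SPDK RPC client
key_path=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
# Note: the trace itself warns that the PSK-path form of these RPCs is deprecated for removal in v24.09.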
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.474 [2024-07-12 14:57:22.021131] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:43.474 [2024-07-12 14:57:22.021201] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:43.474 nvme0n1 00:17:43.474 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.474 14:57:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:43.474 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.474 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.474 [ 00:17:43.474 { 00:17:43.474 "aliases": [ 00:17:43.474 "4580d403-6f8f-4f4e-9966-15e4ef7ea356" 00:17:43.474 ], 00:17:43.474 "assigned_rate_limits": { 00:17:43.474 "r_mbytes_per_sec": 0, 00:17:43.474 "rw_ios_per_sec": 0, 00:17:43.474 "rw_mbytes_per_sec": 0, 00:17:43.474 "w_mbytes_per_sec": 0 00:17:43.474 }, 00:17:43.474 "block_size": 512, 00:17:43.474 "claimed": false, 00:17:43.474 "driver_specific": { 00:17:43.474 "mp_policy": "active_passive", 00:17:43.474 "nvme": [ 00:17:43.474 { 00:17:43.474 "ctrlr_data": { 00:17:43.474 "ana_reporting": false, 00:17:43.474 "cntlid": 3, 00:17:43.474 "firmware_revision": "24.09", 00:17:43.474 "model_number": "SPDK bdev Controller", 00:17:43.474 "multi_ctrlr": true, 00:17:43.474 "oacs": { 00:17:43.474 "firmware": 0, 00:17:43.474 "format": 0, 00:17:43.474 "ns_manage": 0, 00:17:43.474 "security": 0 00:17:43.474 }, 00:17:43.474 "serial_number": "00000000000000000000", 00:17:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:43.474 "vendor_id": "0x8086" 00:17:43.474 }, 00:17:43.474 "ns_data": { 00:17:43.474 "can_share": true, 00:17:43.474 "id": 1 00:17:43.474 }, 00:17:43.474 "trid": { 00:17:43.474 "adrfam": "IPv4", 00:17:43.475 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:43.475 "traddr": "10.0.0.2", 00:17:43.475 "trsvcid": "4421", 00:17:43.475 "trtype": "TCP" 00:17:43.475 }, 00:17:43.475 "vs": { 00:17:43.475 "nvme_version": "1.3" 00:17:43.475 } 00:17:43.475 } 00:17:43.475 ] 00:17:43.475 }, 00:17:43.475 "memory_domains": [ 00:17:43.475 { 00:17:43.475 "dma_device_id": "system", 00:17:43.475 "dma_device_type": 1 00:17:43.475 } 00:17:43.475 ], 00:17:43.475 "name": "nvme0n1", 00:17:43.475 "num_blocks": 2097152, 00:17:43.475 "product_name": "NVMe disk", 00:17:43.475 "supported_io_types": { 00:17:43.475 "abort": true, 00:17:43.475 "compare": true, 00:17:43.475 "compare_and_write": true, 00:17:43.475 "copy": true, 00:17:43.475 "flush": true, 00:17:43.475 "get_zone_info": false, 00:17:43.475 "nvme_admin": true, 00:17:43.475 "nvme_io": true, 00:17:43.475 "nvme_io_md": false, 00:17:43.475 "nvme_iov_md": false, 00:17:43.475 "read": true, 00:17:43.475 "reset": true, 00:17:43.475 "seek_data": false, 00:17:43.475 "seek_hole": false, 00:17:43.475 "unmap": false, 00:17:43.475 "write": true, 00:17:43.475 "write_zeroes": true, 00:17:43.475 "zcopy": false, 00:17:43.475 "zone_append": false, 00:17:43.475 "zone_management": false 00:17:43.475 }, 00:17:43.475 "uuid": "4580d403-6f8f-4f4e-9966-15e4ef7ea356", 00:17:43.475 "zoned": false 00:17:43.475 } 00:17:43.475 ] 00:17:43.475 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.475 14:57:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:43.475 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.475 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.733 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.733 14:57:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.k59u51tUzL 00:17:43.733 14:57:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:43.733 14:57:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:17:43.733 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:43.733 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:17:43.733 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.733 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:17:43.733 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.734 rmmod nvme_tcp 00:17:43.734 rmmod nvme_fabrics 00:17:43.734 rmmod nvme_keyring 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 86563 ']' 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 86563 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 86563 ']' 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 86563 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86563 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:43.734 killing process with pid 86563 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86563' 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 86563 00:17:43.734 [2024-07-12 14:57:22.259543] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:43.734 [2024-07-12 14:57:22.259590] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:43.734 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 86563 00:17:43.992 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:43.992 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:43.992 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:43.992 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:43.992 
14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:43.992 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.992 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.992 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.992 14:57:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:43.992 00:17:43.992 real 0m1.819s 00:17:43.992 user 0m1.547s 00:17:43.992 sys 0m0.500s 00:17:43.992 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:43.992 14:57:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.992 ************************************ 00:17:43.992 END TEST nvmf_async_init 00:17:43.992 ************************************ 00:17:43.992 14:57:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:43.992 14:57:22 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:43.992 14:57:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:43.992 14:57:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:43.992 14:57:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:43.992 ************************************ 00:17:43.992 START TEST dma 00:17:43.992 ************************************ 00:17:43.992 14:57:22 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:43.992 * Looking for test storage... 00:17:43.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:43.992 14:57:22 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:43.993 14:57:22 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.993 14:57:22 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.993 14:57:22 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.993 14:57:22 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.993 14:57:22 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.993 14:57:22 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.993 14:57:22 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:17:43.993 14:57:22 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:43.993 14:57:22 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:43.993 14:57:22 nvmf_tcp.dma -- 
host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:43.993 14:57:22 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:17:43.993 00:17:43.993 real 0m0.099s 00:17:43.993 user 0m0.045s 00:17:43.993 sys 0m0.059s 00:17:43.993 14:57:22 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:43.993 ************************************ 00:17:43.993 14:57:22 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:17:43.993 END TEST dma 00:17:43.993 ************************************ 00:17:43.993 14:57:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:43.993 14:57:22 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:43.993 14:57:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:43.993 14:57:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:43.993 14:57:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:43.993 ************************************ 00:17:43.993 START TEST nvmf_identify 00:17:43.993 ************************************ 00:17:43.993 14:57:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:44.251 * Looking for test storage... 00:17:44.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.251 14:57:22 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:44.252 Cannot find device "nvmf_tgt_br" 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:44.252 Cannot find device "nvmf_tgt_br2" 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:17:44.252 Cannot find device "nvmf_tgt_br" 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:44.252 Cannot find device "nvmf_tgt_br2" 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:44.252 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:44.510 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:44.510 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:44.510 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:44.510 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:44.510 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:44.510 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:44.510 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:44.510 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:44.510 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:44.510 14:57:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:44.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:44.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:17:44.510 00:17:44.510 --- 10.0.0.2 ping statistics --- 00:17:44.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.510 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:44.510 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:44.510 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:17:44.510 00:17:44.510 --- 10.0.0.3 ping statistics --- 00:17:44.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.510 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:44.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:44.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:44.510 00:17:44.510 --- 10.0.0.1 ping statistics --- 00:17:44.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.510 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86815 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86815 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 86815 ']' 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- 
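The pings above close out nvmf_veth_init: a namespace nvmf_tgt_ns_spdk is created, veth pairs nvmf_init_if/nvmf_init_br and nvmf_tgt_if/nvmf_tgt_br are wired up, the target end moves into the namespace, 10.0.0.1/24 stays on the initiator side while 10.0.0.2/24 (and 10.0.0.3/24 on a second pair) land in the namespace, both bridge ends join nvmf_br, and an iptables rule admits port 4420. A condensed sketch of that topology, assuming root and the interface names used in the trace (the second target pair is set up the same way and is omitted here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                           # bridge the host-side ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # initiator -> namespaced target address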
common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.510 14:57:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:44.768 [2024-07-12 14:57:23.177973] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:17:44.768 [2024-07-12 14:57:23.178077] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.768 [2024-07-12 14:57:23.320799] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.768 [2024-07-12 14:57:23.392029] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.768 [2024-07-12 14:57:23.392090] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.768 [2024-07-12 14:57:23.392104] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.768 [2024-07-12 14:57:23.392115] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.768 [2024-07-12 14:57:23.392124] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.768 [2024-07-12 14:57:23.392307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.768 [2024-07-12 14:57:23.392561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.768 [2024-07-12 14:57:23.392562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.768 [2024-07-12 14:57:23.392409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:45.702 [2024-07-12 14:57:24.187067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:45.702 Malloc0 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.702 
14:57:24 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:45.702 [2024-07-12 14:57:24.279768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:45.702 [ 00:17:45.702 { 00:17:45.702 "allow_any_host": true, 00:17:45.702 "hosts": [], 00:17:45.702 "listen_addresses": [ 00:17:45.702 { 00:17:45.702 "adrfam": "IPv4", 00:17:45.702 "traddr": "10.0.0.2", 00:17:45.702 "trsvcid": "4420", 00:17:45.702 "trtype": "TCP" 00:17:45.702 } 00:17:45.702 ], 00:17:45.702 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:45.702 "subtype": "Discovery" 00:17:45.702 }, 00:17:45.702 { 00:17:45.702 "allow_any_host": true, 00:17:45.702 "hosts": [], 00:17:45.702 "listen_addresses": [ 00:17:45.702 { 00:17:45.702 "adrfam": "IPv4", 00:17:45.702 "traddr": "10.0.0.2", 00:17:45.702 "trsvcid": "4420", 00:17:45.702 "trtype": "TCP" 00:17:45.702 } 00:17:45.702 ], 00:17:45.702 "max_cntlid": 65519, 00:17:45.702 "max_namespaces": 32, 00:17:45.702 "min_cntlid": 1, 00:17:45.702 "model_number": "SPDK bdev Controller", 00:17:45.702 "namespaces": [ 00:17:45.702 { 00:17:45.702 "bdev_name": "Malloc0", 00:17:45.702 "eui64": "ABCDEF0123456789", 00:17:45.702 "name": "Malloc0", 00:17:45.702 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:45.702 "nsid": 1, 00:17:45.702 "uuid": "16fba050-7267-4f80-a293-99093c444f97" 00:17:45.702 } 00:17:45.702 ], 00:17:45.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.702 "serial_number": "SPDK00000000000001", 00:17:45.702 "subtype": "NVMe" 00:17:45.702 } 00:17:45.702 ] 
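The nvmf_get_subsystems dump above reflects the identify-test provisioning: a TCP transport, a 64 MB Malloc bdev, subsystem cnode1 with an NGUID/EUI64-tagged namespace, a data listener on 10.0.0.2:4420, and the discovery subsystem listening on the same port. A condensed sketch of those calls via scripts/rpc.py (flags copied from the trace):

rpc="sudo scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed stock SPDK RPC client
$rpc nvmf_create_transport -t tcp -o -u 8192      # transport flags as in the trace
$rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MB, 512 B block size
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems                          # returns the JSON shown above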
00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.702 14:57:24 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:45.702 [2024-07-12 14:57:24.332417] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:17:45.702 [2024-07-12 14:57:24.332473] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86874 ] 00:17:45.963 [2024-07-12 14:57:24.478932] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:45.963 [2024-07-12 14:57:24.479010] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:45.963 [2024-07-12 14:57:24.479017] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:45.963 [2024-07-12 14:57:24.479031] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:45.963 [2024-07-12 14:57:24.479039] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:45.963 [2024-07-12 14:57:24.479360] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:45.963 [2024-07-12 14:57:24.479420] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc9fc00 0 00:17:45.963 [2024-07-12 14:57:24.485542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:45.963 [2024-07-12 14:57:24.485571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:45.963 [2024-07-12 14:57:24.485578] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:45.963 [2024-07-12 14:57:24.485582] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:45.963 [2024-07-12 14:57:24.485620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.963 [2024-07-12 14:57:24.485628] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.963 [2024-07-12 14:57:24.485633] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9fc00) 00:17:45.963 [2024-07-12 14:57:24.485652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:45.963 [2024-07-12 14:57:24.485684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce29c0, cid 0, qid 0 00:17:45.963 [2024-07-12 14:57:24.493539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.963 [2024-07-12 14:57:24.493562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.963 [2024-07-12 14:57:24.493568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.963 [2024-07-12 14:57:24.493574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce29c0) on tqpair=0xc9fc00 00:17:45.963 [2024-07-12 14:57:24.493585] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:45.963 [2024-07-12 14:57:24.493594] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:45.963 [2024-07-12 14:57:24.493600] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:45.963 [2024-07-12 14:57:24.493619] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.963 [2024-07-12 14:57:24.493625] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.963 [2024-07-12 14:57:24.493629] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9fc00) 00:17:45.963 [2024-07-12 14:57:24.493639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.963 [2024-07-12 14:57:24.493672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce29c0, cid 0, qid 0 00:17:45.963 [2024-07-12 14:57:24.493760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.963 [2024-07-12 14:57:24.493768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.963 [2024-07-12 14:57:24.493772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.963 [2024-07-12 14:57:24.493777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce29c0) on tqpair=0xc9fc00 00:17:45.963 [2024-07-12 14:57:24.493783] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:45.963 [2024-07-12 14:57:24.493792] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:45.964 [2024-07-12 14:57:24.493800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.493805] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.493809] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9fc00) 00:17:45.964 [2024-07-12 14:57:24.493817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.964 [2024-07-12 14:57:24.493839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce29c0, cid 0, qid 0 00:17:45.964 [2024-07-12 14:57:24.493897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.964 [2024-07-12 14:57:24.493905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.964 [2024-07-12 14:57:24.493909] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.493914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce29c0) on tqpair=0xc9fc00 00:17:45.964 [2024-07-12 14:57:24.493920] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:45.964 [2024-07-12 14:57:24.493929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:45.964 [2024-07-12 14:57:24.493937] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.493942] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.493946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9fc00) 00:17:45.964 
[2024-07-12 14:57:24.493954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.964 [2024-07-12 14:57:24.493973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce29c0, cid 0, qid 0 00:17:45.964 [2024-07-12 14:57:24.494030] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.964 [2024-07-12 14:57:24.494036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.964 [2024-07-12 14:57:24.494041] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494045] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce29c0) on tqpair=0xc9fc00 00:17:45.964 [2024-07-12 14:57:24.494051] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:45.964 [2024-07-12 14:57:24.494062] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494067] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9fc00) 00:17:45.964 [2024-07-12 14:57:24.494079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.964 [2024-07-12 14:57:24.494099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce29c0, cid 0, qid 0 00:17:45.964 [2024-07-12 14:57:24.494153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.964 [2024-07-12 14:57:24.494160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.964 [2024-07-12 14:57:24.494164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce29c0) on tqpair=0xc9fc00 00:17:45.964 [2024-07-12 14:57:24.494174] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:45.964 [2024-07-12 14:57:24.494179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:45.964 [2024-07-12 14:57:24.494193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:45.964 [2024-07-12 14:57:24.494300] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:45.964 [2024-07-12 14:57:24.494306] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:45.964 [2024-07-12 14:57:24.494316] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9fc00) 00:17:45.964 [2024-07-12 14:57:24.494332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.964 [2024-07-12 
14:57:24.494351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce29c0, cid 0, qid 0 00:17:45.964 [2024-07-12 14:57:24.494408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.964 [2024-07-12 14:57:24.494415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.964 [2024-07-12 14:57:24.494419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494423] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce29c0) on tqpair=0xc9fc00 00:17:45.964 [2024-07-12 14:57:24.494429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:45.964 [2024-07-12 14:57:24.494440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494445] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9fc00) 00:17:45.964 [2024-07-12 14:57:24.494457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.964 [2024-07-12 14:57:24.494475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce29c0, cid 0, qid 0 00:17:45.964 [2024-07-12 14:57:24.494547] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.964 [2024-07-12 14:57:24.494556] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.964 [2024-07-12 14:57:24.494560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce29c0) on tqpair=0xc9fc00 00:17:45.964 [2024-07-12 14:57:24.494570] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:45.964 [2024-07-12 14:57:24.494575] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:45.964 [2024-07-12 14:57:24.494584] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:45.964 [2024-07-12 14:57:24.494595] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:45.964 [2024-07-12 14:57:24.494607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9fc00) 00:17:45.964 [2024-07-12 14:57:24.494619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.964 [2024-07-12 14:57:24.494642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce29c0, cid 0, qid 0 00:17:45.964 [2024-07-12 14:57:24.494751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:45.964 [2024-07-12 14:57:24.494758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:45.964 [2024-07-12 14:57:24.494763] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:45.964 [2024-07-12 
14:57:24.494767] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc9fc00): datao=0, datal=4096, cccid=0 00:17:45.964 [2024-07-12 14:57:24.494773] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xce29c0) on tqpair(0xc9fc00): expected_datao=0, payload_size=4096 00:17:45.964 [2024-07-12 14:57:24.494778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494787] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494793] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494802] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.964 [2024-07-12 14:57:24.494809] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.964 [2024-07-12 14:57:24.494813] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494818] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce29c0) on tqpair=0xc9fc00 00:17:45.964 [2024-07-12 14:57:24.494827] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:45.964 [2024-07-12 14:57:24.494833] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:45.964 [2024-07-12 14:57:24.494842] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:45.964 [2024-07-12 14:57:24.494848] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:45.964 [2024-07-12 14:57:24.494853] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:45.964 [2024-07-12 14:57:24.494859] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:45.964 [2024-07-12 14:57:24.494868] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:45.964 [2024-07-12 14:57:24.494876] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9fc00) 00:17:45.964 [2024-07-12 14:57:24.494894] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:45.964 [2024-07-12 14:57:24.494916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce29c0, cid 0, qid 0 00:17:45.964 [2024-07-12 14:57:24.494979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.964 [2024-07-12 14:57:24.494986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.964 [2024-07-12 14:57:24.494990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.494995] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce29c0) on tqpair=0xc9fc00 00:17:45.964 [2024-07-12 14:57:24.495003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.495008] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.495012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9fc00) 00:17:45.964 [2024-07-12 14:57:24.495019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.964 [2024-07-12 14:57:24.495026] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.495030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.495034] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc9fc00) 00:17:45.964 [2024-07-12 14:57:24.495040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.964 [2024-07-12 14:57:24.495047] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.495051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.495055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc9fc00) 00:17:45.964 [2024-07-12 14:57:24.495061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.964 [2024-07-12 14:57:24.495067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.964 [2024-07-12 14:57:24.495071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495076] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.965 [2024-07-12 14:57:24.495082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.965 [2024-07-12 14:57:24.495087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:45.965 [2024-07-12 14:57:24.495100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:45.965 [2024-07-12 14:57:24.495108] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc9fc00) 00:17:45.965 [2024-07-12 14:57:24.495120] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.965 [2024-07-12 14:57:24.495142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce29c0, cid 0, qid 0 00:17:45.965 [2024-07-12 14:57:24.495150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2b40, cid 1, qid 0 00:17:45.965 [2024-07-12 14:57:24.495155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2cc0, cid 2, qid 0 00:17:45.965 [2024-07-12 14:57:24.495160] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.965 [2024-07-12 14:57:24.495165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2fc0, cid 4, qid 0 00:17:45.965 [2024-07-12 14:57:24.495260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.965 [2024-07-12 
14:57:24.495267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.965 [2024-07-12 14:57:24.495271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2fc0) on tqpair=0xc9fc00 00:17:45.965 [2024-07-12 14:57:24.495282] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:45.965 [2024-07-12 14:57:24.495288] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:45.965 [2024-07-12 14:57:24.495300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc9fc00) 00:17:45.965 [2024-07-12 14:57:24.495313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.965 [2024-07-12 14:57:24.495339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2fc0, cid 4, qid 0 00:17:45.965 [2024-07-12 14:57:24.495407] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:45.965 [2024-07-12 14:57:24.495414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:45.965 [2024-07-12 14:57:24.495418] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495422] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc9fc00): datao=0, datal=4096, cccid=4 00:17:45.965 [2024-07-12 14:57:24.495428] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xce2fc0) on tqpair(0xc9fc00): expected_datao=0, payload_size=4096 00:17:45.965 [2024-07-12 14:57:24.495433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495440] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495445] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.965 [2024-07-12 14:57:24.495460] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.965 [2024-07-12 14:57:24.495464] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495468] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2fc0) on tqpair=0xc9fc00 00:17:45.965 [2024-07-12 14:57:24.495482] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:45.965 [2024-07-12 14:57:24.495535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495545] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc9fc00) 00:17:45.965 [2024-07-12 14:57:24.495553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.965 [2024-07-12 14:57:24.495562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495566] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495570] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc9fc00) 00:17:45.965 [2024-07-12 14:57:24.495577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.965 [2024-07-12 14:57:24.495606] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2fc0, cid 4, qid 0 00:17:45.965 [2024-07-12 14:57:24.495614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce3140, cid 5, qid 0 00:17:45.965 [2024-07-12 14:57:24.495718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:45.965 [2024-07-12 14:57:24.495725] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:45.965 [2024-07-12 14:57:24.495730] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495734] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc9fc00): datao=0, datal=1024, cccid=4 00:17:45.965 [2024-07-12 14:57:24.495739] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xce2fc0) on tqpair(0xc9fc00): expected_datao=0, payload_size=1024 00:17:45.965 [2024-07-12 14:57:24.495744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495751] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495755] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.965 [2024-07-12 14:57:24.495768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.965 [2024-07-12 14:57:24.495772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.495776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce3140) on tqpair=0xc9fc00 00:17:45.965 [2024-07-12 14:57:24.536661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.965 [2024-07-12 14:57:24.536707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.965 [2024-07-12 14:57:24.536714] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.536721] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2fc0) on tqpair=0xc9fc00 00:17:45.965 [2024-07-12 14:57:24.536749] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.536755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc9fc00) 00:17:45.965 [2024-07-12 14:57:24.536770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.965 [2024-07-12 14:57:24.536815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2fc0, cid 4, qid 0 00:17:45.965 [2024-07-12 14:57:24.536953] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:45.965 [2024-07-12 14:57:24.536961] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:45.965 [2024-07-12 14:57:24.536965] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.536970] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc9fc00): datao=0, datal=3072, cccid=4 00:17:45.965 [2024-07-12 14:57:24.536975] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xce2fc0) on tqpair(0xc9fc00): expected_datao=0, payload_size=3072 00:17:45.965 [2024-07-12 14:57:24.536991] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.537001] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.537007] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.537016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.965 [2024-07-12 14:57:24.537023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.965 [2024-07-12 14:57:24.537027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.537031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2fc0) on tqpair=0xc9fc00 00:17:45.965 [2024-07-12 14:57:24.537044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.537049] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc9fc00) 00:17:45.965 [2024-07-12 14:57:24.537058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.965 [2024-07-12 14:57:24.537087] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2fc0, cid 4, qid 0 00:17:45.965 [2024-07-12 14:57:24.537165] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:45.965 [2024-07-12 14:57:24.537173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:45.965 [2024-07-12 14:57:24.537177] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.537181] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc9fc00): datao=0, datal=8, cccid=4 00:17:45.965 [2024-07-12 14:57:24.537186] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xce2fc0) on tqpair(0xc9fc00): expected_datao=0, payload_size=8 00:17:45.965 [2024-07-12 14:57:24.537191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.537198] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.537202] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.580598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.965 [2024-07-12 14:57:24.580645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.965 [2024-07-12 14:57:24.580652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.965 [2024-07-12 14:57:24.580659] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2fc0) on tqpair=0xc9fc00 00:17:45.965 ===================================================== 00:17:45.965 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:45.965 ===================================================== 00:17:45.965 Controller Capabilities/Features 00:17:45.965 ================================ 00:17:45.965 Vendor ID: 0000 00:17:45.965 Subsystem Vendor ID: 0000 00:17:45.965 Serial Number: .................... 00:17:45.965 Model Number: ........................................ 
00:17:45.965 Firmware Version: 24.09 00:17:45.965 Recommended Arb Burst: 0 00:17:45.965 IEEE OUI Identifier: 00 00 00 00:17:45.965 Multi-path I/O 00:17:45.965 May have multiple subsystem ports: No 00:17:45.965 May have multiple controllers: No 00:17:45.965 Associated with SR-IOV VF: No 00:17:45.965 Max Data Transfer Size: 131072 00:17:45.965 Max Number of Namespaces: 0 00:17:45.965 Max Number of I/O Queues: 1024 00:17:45.965 NVMe Specification Version (VS): 1.3 00:17:45.965 NVMe Specification Version (Identify): 1.3 00:17:45.965 Maximum Queue Entries: 128 00:17:45.965 Contiguous Queues Required: Yes 00:17:45.965 Arbitration Mechanisms Supported 00:17:45.965 Weighted Round Robin: Not Supported 00:17:45.965 Vendor Specific: Not Supported 00:17:45.965 Reset Timeout: 15000 ms 00:17:45.965 Doorbell Stride: 4 bytes 00:17:45.965 NVM Subsystem Reset: Not Supported 00:17:45.965 Command Sets Supported 00:17:45.965 NVM Command Set: Supported 00:17:45.965 Boot Partition: Not Supported 00:17:45.965 Memory Page Size Minimum: 4096 bytes 00:17:45.966 Memory Page Size Maximum: 4096 bytes 00:17:45.966 Persistent Memory Region: Not Supported 00:17:45.966 Optional Asynchronous Events Supported 00:17:45.966 Namespace Attribute Notices: Not Supported 00:17:45.966 Firmware Activation Notices: Not Supported 00:17:45.966 ANA Change Notices: Not Supported 00:17:45.966 PLE Aggregate Log Change Notices: Not Supported 00:17:45.966 LBA Status Info Alert Notices: Not Supported 00:17:45.966 EGE Aggregate Log Change Notices: Not Supported 00:17:45.966 Normal NVM Subsystem Shutdown event: Not Supported 00:17:45.966 Zone Descriptor Change Notices: Not Supported 00:17:45.966 Discovery Log Change Notices: Supported 00:17:45.966 Controller Attributes 00:17:45.966 128-bit Host Identifier: Not Supported 00:17:45.966 Non-Operational Permissive Mode: Not Supported 00:17:45.966 NVM Sets: Not Supported 00:17:45.966 Read Recovery Levels: Not Supported 00:17:45.966 Endurance Groups: Not Supported 00:17:45.966 Predictable Latency Mode: Not Supported 00:17:45.966 Traffic Based Keep ALive: Not Supported 00:17:45.966 Namespace Granularity: Not Supported 00:17:45.966 SQ Associations: Not Supported 00:17:45.966 UUID List: Not Supported 00:17:45.966 Multi-Domain Subsystem: Not Supported 00:17:45.966 Fixed Capacity Management: Not Supported 00:17:45.966 Variable Capacity Management: Not Supported 00:17:45.966 Delete Endurance Group: Not Supported 00:17:45.966 Delete NVM Set: Not Supported 00:17:45.966 Extended LBA Formats Supported: Not Supported 00:17:45.966 Flexible Data Placement Supported: Not Supported 00:17:45.966 00:17:45.966 Controller Memory Buffer Support 00:17:45.966 ================================ 00:17:45.966 Supported: No 00:17:45.966 00:17:45.966 Persistent Memory Region Support 00:17:45.966 ================================ 00:17:45.966 Supported: No 00:17:45.966 00:17:45.966 Admin Command Set Attributes 00:17:45.966 ============================ 00:17:45.966 Security Send/Receive: Not Supported 00:17:45.966 Format NVM: Not Supported 00:17:45.966 Firmware Activate/Download: Not Supported 00:17:45.966 Namespace Management: Not Supported 00:17:45.966 Device Self-Test: Not Supported 00:17:45.966 Directives: Not Supported 00:17:45.966 NVMe-MI: Not Supported 00:17:45.966 Virtualization Management: Not Supported 00:17:45.966 Doorbell Buffer Config: Not Supported 00:17:45.966 Get LBA Status Capability: Not Supported 00:17:45.966 Command & Feature Lockdown Capability: Not Supported 00:17:45.966 Abort Command Limit: 1 00:17:45.966 Async 
Event Request Limit: 4 00:17:45.966 Number of Firmware Slots: N/A 00:17:45.966 Firmware Slot 1 Read-Only: N/A 00:17:45.966 Firmware Activation Without Reset: N/A 00:17:45.966 Multiple Update Detection Support: N/A 00:17:45.966 Firmware Update Granularity: No Information Provided 00:17:45.966 Per-Namespace SMART Log: No 00:17:45.966 Asymmetric Namespace Access Log Page: Not Supported 00:17:45.966 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:45.966 Command Effects Log Page: Not Supported 00:17:45.966 Get Log Page Extended Data: Supported 00:17:45.966 Telemetry Log Pages: Not Supported 00:17:45.966 Persistent Event Log Pages: Not Supported 00:17:45.966 Supported Log Pages Log Page: May Support 00:17:45.966 Commands Supported & Effects Log Page: Not Supported 00:17:45.966 Feature Identifiers & Effects Log Page:May Support 00:17:45.966 NVMe-MI Commands & Effects Log Page: May Support 00:17:45.966 Data Area 4 for Telemetry Log: Not Supported 00:17:45.966 Error Log Page Entries Supported: 128 00:17:45.966 Keep Alive: Not Supported 00:17:45.966 00:17:45.966 NVM Command Set Attributes 00:17:45.966 ========================== 00:17:45.966 Submission Queue Entry Size 00:17:45.966 Max: 1 00:17:45.966 Min: 1 00:17:45.966 Completion Queue Entry Size 00:17:45.966 Max: 1 00:17:45.966 Min: 1 00:17:45.966 Number of Namespaces: 0 00:17:45.966 Compare Command: Not Supported 00:17:45.966 Write Uncorrectable Command: Not Supported 00:17:45.966 Dataset Management Command: Not Supported 00:17:45.966 Write Zeroes Command: Not Supported 00:17:45.966 Set Features Save Field: Not Supported 00:17:45.966 Reservations: Not Supported 00:17:45.966 Timestamp: Not Supported 00:17:45.966 Copy: Not Supported 00:17:45.966 Volatile Write Cache: Not Present 00:17:45.966 Atomic Write Unit (Normal): 1 00:17:45.966 Atomic Write Unit (PFail): 1 00:17:45.966 Atomic Compare & Write Unit: 1 00:17:45.966 Fused Compare & Write: Supported 00:17:45.966 Scatter-Gather List 00:17:45.966 SGL Command Set: Supported 00:17:45.966 SGL Keyed: Supported 00:17:45.966 SGL Bit Bucket Descriptor: Not Supported 00:17:45.966 SGL Metadata Pointer: Not Supported 00:17:45.966 Oversized SGL: Not Supported 00:17:45.966 SGL Metadata Address: Not Supported 00:17:45.966 SGL Offset: Supported 00:17:45.966 Transport SGL Data Block: Not Supported 00:17:45.966 Replay Protected Memory Block: Not Supported 00:17:45.966 00:17:45.966 Firmware Slot Information 00:17:45.966 ========================= 00:17:45.966 Active slot: 0 00:17:45.966 00:17:45.966 00:17:45.966 Error Log 00:17:45.966 ========= 00:17:45.966 00:17:45.966 Active Namespaces 00:17:45.966 ================= 00:17:45.966 Discovery Log Page 00:17:45.966 ================== 00:17:45.966 Generation Counter: 2 00:17:45.966 Number of Records: 2 00:17:45.966 Record Format: 0 00:17:45.966 00:17:45.966 Discovery Log Entry 0 00:17:45.966 ---------------------- 00:17:45.966 Transport Type: 3 (TCP) 00:17:45.966 Address Family: 1 (IPv4) 00:17:45.966 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:45.966 Entry Flags: 00:17:45.966 Duplicate Returned Information: 1 00:17:45.966 Explicit Persistent Connection Support for Discovery: 1 00:17:45.966 Transport Requirements: 00:17:45.966 Secure Channel: Not Required 00:17:45.966 Port ID: 0 (0x0000) 00:17:45.966 Controller ID: 65535 (0xffff) 00:17:45.966 Admin Max SQ Size: 128 00:17:45.966 Transport Service Identifier: 4420 00:17:45.966 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:45.966 Transport Address: 10.0.0.2 00:17:45.966 
Discovery Log Entry 1 00:17:45.966 ---------------------- 00:17:45.966 Transport Type: 3 (TCP) 00:17:45.966 Address Family: 1 (IPv4) 00:17:45.966 Subsystem Type: 2 (NVM Subsystem) 00:17:45.966 Entry Flags: 00:17:45.966 Duplicate Returned Information: 0 00:17:45.966 Explicit Persistent Connection Support for Discovery: 0 00:17:45.966 Transport Requirements: 00:17:45.966 Secure Channel: Not Required 00:17:45.966 Port ID: 0 (0x0000) 00:17:45.966 Controller ID: 65535 (0xffff) 00:17:45.966 Admin Max SQ Size: 128 00:17:45.966 Transport Service Identifier: 4420 00:17:45.966 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:45.966 Transport Address: 10.0.0.2 [2024-07-12 14:57:24.580816] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:45.966 [2024-07-12 14:57:24.580834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce29c0) on tqpair=0xc9fc00 00:17:45.966 [2024-07-12 14:57:24.580844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.966 [2024-07-12 14:57:24.580851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2b40) on tqpair=0xc9fc00 00:17:45.966 [2024-07-12 14:57:24.580856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.966 [2024-07-12 14:57:24.580862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2cc0) on tqpair=0xc9fc00 00:17:45.966 [2024-07-12 14:57:24.580867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.966 [2024-07-12 14:57:24.580873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.966 [2024-07-12 14:57:24.580878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.966 [2024-07-12 14:57:24.580894] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.966 [2024-07-12 14:57:24.580899] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.966 [2024-07-12 14:57:24.580904] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.966 [2024-07-12 14:57:24.580918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.966 [2024-07-12 14:57:24.580950] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.966 [2024-07-12 14:57:24.581041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.966 [2024-07-12 14:57:24.581049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.966 [2024-07-12 14:57:24.581053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.966 [2024-07-12 14:57:24.581057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.966 [2024-07-12 14:57:24.581072] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.966 [2024-07-12 14:57:24.581077] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.966 [2024-07-12 14:57:24.581083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.966 [2024-07-12 14:57:24.581091] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.966 [2024-07-12 14:57:24.581118] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.966 [2024-07-12 14:57:24.581209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.966 [2024-07-12 14:57:24.581216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.966 [2024-07-12 14:57:24.581220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.966 [2024-07-12 14:57:24.581225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.967 [2024-07-12 14:57:24.581231] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:45.967 [2024-07-12 14:57:24.581236] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:45.967 [2024-07-12 14:57:24.581247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581252] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581256] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.967 [2024-07-12 14:57:24.581264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.967 [2024-07-12 14:57:24.581283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.967 [2024-07-12 14:57:24.581358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.967 [2024-07-12 14:57:24.581366] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.967 [2024-07-12 14:57:24.581370] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581374] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.967 [2024-07-12 14:57:24.581386] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581396] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.967 [2024-07-12 14:57:24.581403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.967 [2024-07-12 14:57:24.581422] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.967 [2024-07-12 14:57:24.581481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.967 [2024-07-12 14:57:24.581488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.967 [2024-07-12 14:57:24.581492] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581497] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.967 [2024-07-12 14:57:24.581508] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581513] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581532] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.967 [2024-07-12 14:57:24.581540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.967 [2024-07-12 14:57:24.581561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.967 [2024-07-12 14:57:24.581621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.967 [2024-07-12 14:57:24.581629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.967 [2024-07-12 14:57:24.581633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581637] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.967 [2024-07-12 14:57:24.581649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.967 [2024-07-12 14:57:24.581665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.967 [2024-07-12 14:57:24.581685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.967 [2024-07-12 14:57:24.581741] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.967 [2024-07-12 14:57:24.581748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.967 [2024-07-12 14:57:24.581752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.967 [2024-07-12 14:57:24.581767] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.967 [2024-07-12 14:57:24.581784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.967 [2024-07-12 14:57:24.581802] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.967 [2024-07-12 14:57:24.581858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.967 [2024-07-12 14:57:24.581865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.967 [2024-07-12 14:57:24.581869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.967 [2024-07-12 14:57:24.581884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581890] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.967 [2024-07-12 14:57:24.581901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.967 [2024-07-12 14:57:24.581920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.967 [2024-07-12 14:57:24.581978] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.967 [2024-07-12 14:57:24.581985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.967 [2024-07-12 14:57:24.581990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.581994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.967 [2024-07-12 14:57:24.582005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582010] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.967 [2024-07-12 14:57:24.582022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.967 [2024-07-12 14:57:24.582040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.967 [2024-07-12 14:57:24.582094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.967 [2024-07-12 14:57:24.582101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.967 [2024-07-12 14:57:24.582106] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.967 [2024-07-12 14:57:24.582121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.967 [2024-07-12 14:57:24.582138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.967 [2024-07-12 14:57:24.582156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.967 [2024-07-12 14:57:24.582212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.967 [2024-07-12 14:57:24.582219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.967 [2024-07-12 14:57:24.582223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.967 [2024-07-12 14:57:24.582239] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582244] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.967 [2024-07-12 14:57:24.582256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.967 [2024-07-12 14:57:24.582274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.967 [2024-07-12 14:57:24.582328] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.967 [2024-07-12 14:57:24.582335] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.967 [2024-07-12 14:57:24.582339] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.967 [2024-07-12 14:57:24.582354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582359] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582363] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.967 [2024-07-12 14:57:24.582371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.967 [2024-07-12 14:57:24.582389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.967 [2024-07-12 14:57:24.582452] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.967 [2024-07-12 14:57:24.582460] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.967 [2024-07-12 14:57:24.582464] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582468] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.967 [2024-07-12 14:57:24.582479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.967 [2024-07-12 14:57:24.582496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.967 [2024-07-12 14:57:24.582524] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.967 [2024-07-12 14:57:24.582580] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.967 [2024-07-12 14:57:24.582587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.967 [2024-07-12 14:57:24.582591] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.967 [2024-07-12 14:57:24.582607] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582613] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.967 [2024-07-12 14:57:24.582617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.967 [2024-07-12 14:57:24.582625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.967 [2024-07-12 14:57:24.582645] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.968 [2024-07-12 14:57:24.582701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.968 [2024-07-12 14:57:24.582708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.968 [2024-07-12 14:57:24.582712] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.582717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.968 [2024-07-12 14:57:24.582728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.582733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.582737] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.968 [2024-07-12 14:57:24.582746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.968 [2024-07-12 14:57:24.582764] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.968 [2024-07-12 14:57:24.582817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.968 [2024-07-12 14:57:24.582825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.968 [2024-07-12 14:57:24.582829] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.582833] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.968 [2024-07-12 14:57:24.582844] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.582849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.582853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.968 [2024-07-12 14:57:24.582860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.968 [2024-07-12 14:57:24.582879] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.968 [2024-07-12 14:57:24.582933] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.968 [2024-07-12 14:57:24.582940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.968 [2024-07-12 14:57:24.582944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.582949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.968 [2024-07-12 14:57:24.582960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.582965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.582969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.968 [2024-07-12 14:57:24.582976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.968 [2024-07-12 14:57:24.582995] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.968 [2024-07-12 14:57:24.583048] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.968 [2024-07-12 14:57:24.583055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.968 [2024-07-12 14:57:24.583059] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583064] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.968 
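Between the "shutdown timeout = 10000 ms" note above and the "shutdown complete in 7 milliseconds" message further down, the long run of FABRIC PROPERTY GET qid:0 cid:3 records is the host repeatedly reading CSTS while the discovery controller shuts down. A rough sketch of the condition being waited for, expressed with the public register accessor (the helper name shutdown_done is illustrative; the driver performs this poll internally in nvme_ctrlr_shutdown_poll_async):

	#include <stdbool.h>
	#include "spdk/nvme.h"

	/* Each FABRIC PROPERTY GET qid:0 cid:3 above corresponds to one CSTS read;
	 * polling stops once the controller reports shutdown complete. */
	static bool shutdown_done(struct spdk_nvme_ctrlr *ctrlr)
	{
		union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

		/* CSTS.SHST == 10b: shutdown processing complete. */
		return csts.bits.shst == SPDK_NVME_SHST_COMPLETE;
	}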
[2024-07-12 14:57:24.583075] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.968 [2024-07-12 14:57:24.583092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.968 [2024-07-12 14:57:24.583110] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.968 [2024-07-12 14:57:24.583166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.968 [2024-07-12 14:57:24.583173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.968 [2024-07-12 14:57:24.583177] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.968 [2024-07-12 14:57:24.583193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583198] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.968 [2024-07-12 14:57:24.583209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.968 [2024-07-12 14:57:24.583227] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.968 [2024-07-12 14:57:24.583283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.968 [2024-07-12 14:57:24.583290] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.968 [2024-07-12 14:57:24.583294] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.968 [2024-07-12 14:57:24.583310] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583315] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.968 [2024-07-12 14:57:24.583326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.968 [2024-07-12 14:57:24.583344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.968 [2024-07-12 14:57:24.583397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.968 [2024-07-12 14:57:24.583404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.968 [2024-07-12 14:57:24.583409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583413] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.968 [2024-07-12 14:57:24.583424] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.968 [2024-07-12 
14:57:24.583433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.968 [2024-07-12 14:57:24.583441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.968 [2024-07-12 14:57:24.583459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.968 [2024-07-12 14:57:24.583512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.968 [2024-07-12 14:57:24.583542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.968 [2024-07-12 14:57:24.583547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.968 [2024-07-12 14:57:24.583564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.968 [2024-07-12 14:57:24.583582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.968 [2024-07-12 14:57:24.583604] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.968 [2024-07-12 14:57:24.583663] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.968 [2024-07-12 14:57:24.583670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.968 [2024-07-12 14:57:24.583674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.968 [2024-07-12 14:57:24.583690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.968 [2024-07-12 14:57:24.583707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.968 [2024-07-12 14:57:24.583725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.968 [2024-07-12 14:57:24.583781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.968 [2024-07-12 14:57:24.583788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.968 [2024-07-12 14:57:24.583792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583797] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.968 [2024-07-12 14:57:24.583808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.968 [2024-07-12 14:57:24.583825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.968 [2024-07-12 14:57:24.583843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.968 [2024-07-12 14:57:24.583895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.968 [2024-07-12 14:57:24.583902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.968 [2024-07-12 14:57:24.583907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.968 [2024-07-12 14:57:24.583922] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.583931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.968 [2024-07-12 14:57:24.583939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.968 [2024-07-12 14:57:24.583957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.968 [2024-07-12 14:57:24.584016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.968 [2024-07-12 14:57:24.584023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.968 [2024-07-12 14:57:24.584027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.968 [2024-07-12 14:57:24.584031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.969 [2024-07-12 14:57:24.584042] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.969 [2024-07-12 14:57:24.584047] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.969 [2024-07-12 14:57:24.584051] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.969 [2024-07-12 14:57:24.584059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.969 [2024-07-12 14:57:24.584077] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.969 [2024-07-12 14:57:24.584130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.969 [2024-07-12 14:57:24.584138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.969 [2024-07-12 14:57:24.584142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.969 [2024-07-12 14:57:24.584146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.969 [2024-07-12 14:57:24.584157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.969 [2024-07-12 14:57:24.584162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.969 [2024-07-12 14:57:24.584166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.969 [2024-07-12 14:57:24.584174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.969 [2024-07-12 14:57:24.584192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.969 [2024-07-12 
14:57:24.584261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.969 [2024-07-12 14:57:24.584275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.969 [2024-07-12 14:57:24.584282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.969 [2024-07-12 14:57:24.584289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.969 [2024-07-12 14:57:24.584307] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.969 [2024-07-12 14:57:24.584315] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.969 [2024-07-12 14:57:24.584321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.969 [2024-07-12 14:57:24.584333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.969 [2024-07-12 14:57:24.584365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.969 [2024-07-12 14:57:24.584421] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.969 [2024-07-12 14:57:24.584428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.969 [2024-07-12 14:57:24.584432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.969 [2024-07-12 14:57:24.584436] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.969 [2024-07-12 14:57:24.584448] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.969 [2024-07-12 14:57:24.584453] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.969 [2024-07-12 14:57:24.584458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.969 [2024-07-12 14:57:24.584465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.969 [2024-07-12 14:57:24.584485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.969 [2024-07-12 14:57:24.588582] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.969 [2024-07-12 14:57:24.588637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.969 [2024-07-12 14:57:24.588648] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.969 [2024-07-12 14:57:24.588657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.969 [2024-07-12 14:57:24.588686] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.969 [2024-07-12 14:57:24.588693] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.969 [2024-07-12 14:57:24.588697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9fc00) 00:17:45.969 [2024-07-12 14:57:24.588711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.969 [2024-07-12 14:57:24.588761] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xce2e40, cid 3, qid 0 00:17:45.969 [2024-07-12 14:57:24.588858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.969 [2024-07-12 14:57:24.588866] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.969 [2024-07-12 
14:57:24.588870] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.969 [2024-07-12 14:57:24.588874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xce2e40) on tqpair=0xc9fc00 00:17:45.969 [2024-07-12 14:57:24.588884] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:17:45.969 00:17:45.969 14:57:24 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:46.230 [2024-07-12 14:57:24.633476] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:17:46.230 [2024-07-12 14:57:24.633560] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86876 ] 00:17:46.230 [2024-07-12 14:57:24.774996] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:46.230 [2024-07-12 14:57:24.775073] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:46.230 [2024-07-12 14:57:24.775081] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:46.230 [2024-07-12 14:57:24.775095] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:46.230 [2024-07-12 14:57:24.775102] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:46.230 [2024-07-12 14:57:24.775396] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:46.230 [2024-07-12 14:57:24.775463] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xebdc00 0 00:17:46.230 [2024-07-12 14:57:24.787542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:46.230 [2024-07-12 14:57:24.787569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:46.230 [2024-07-12 14:57:24.787577] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:46.230 [2024-07-12 14:57:24.787581] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:46.230 [2024-07-12 14:57:24.787622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.230 [2024-07-12 14:57:24.787630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.230 [2024-07-12 14:57:24.787635] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebdc00) 00:17:46.230 [2024-07-12 14:57:24.787651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:46.230 [2024-07-12 14:57:24.787687] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf009c0, cid 0, qid 0 00:17:46.230 [2024-07-12 14:57:24.795535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.230 [2024-07-12 14:57:24.795558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.230 [2024-07-12 14:57:24.795564] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.230 [2024-07-12 14:57:24.795570] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xf009c0) on tqpair=0xebdc00 00:17:46.230 [2024-07-12 14:57:24.795583] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:46.230 [2024-07-12 14:57:24.795593] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:46.230 [2024-07-12 14:57:24.795600] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:46.230 [2024-07-12 14:57:24.795621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.230 [2024-07-12 14:57:24.795627] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.230 [2024-07-12 14:57:24.795631] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebdc00) 00:17:46.230 [2024-07-12 14:57:24.795643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.230 [2024-07-12 14:57:24.795676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf009c0, cid 0, qid 0 00:17:46.230 [2024-07-12 14:57:24.795756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.230 [2024-07-12 14:57:24.795764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.230 [2024-07-12 14:57:24.795768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.230 [2024-07-12 14:57:24.795772] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf009c0) on tqpair=0xebdc00 00:17:46.230 [2024-07-12 14:57:24.795778] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:46.230 [2024-07-12 14:57:24.795787] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:46.230 [2024-07-12 14:57:24.795795] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.230 [2024-07-12 14:57:24.795800] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.230 [2024-07-12 14:57:24.795804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebdc00) 00:17:46.230 [2024-07-12 14:57:24.795812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.230 [2024-07-12 14:57:24.795833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf009c0, cid 0, qid 0 00:17:46.230 [2024-07-12 14:57:24.795891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.230 [2024-07-12 14:57:24.795899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.230 [2024-07-12 14:57:24.795903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.230 [2024-07-12 14:57:24.795908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf009c0) on tqpair=0xebdc00 00:17:46.230 [2024-07-12 14:57:24.795914] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:46.230 [2024-07-12 14:57:24.795924] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:46.230 [2024-07-12 14:57:24.795932] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.230 [2024-07-12 14:57:24.795936] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.230 [2024-07-12 14:57:24.795940] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebdc00) 00:17:46.230 [2024-07-12 14:57:24.795948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.230 [2024-07-12 14:57:24.795967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf009c0, cid 0, qid 0 00:17:46.230 [2024-07-12 14:57:24.796021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.230 [2024-07-12 14:57:24.796028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.230 [2024-07-12 14:57:24.796032] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.230 [2024-07-12 14:57:24.796036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf009c0) on tqpair=0xebdc00 00:17:46.230 [2024-07-12 14:57:24.796043] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:46.230 [2024-07-12 14:57:24.796054] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.230 [2024-07-12 14:57:24.796059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.230 [2024-07-12 14:57:24.796063] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebdc00) 00:17:46.230 [2024-07-12 14:57:24.796071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.230 [2024-07-12 14:57:24.796100] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf009c0, cid 0, qid 0 00:17:46.230 [2024-07-12 14:57:24.796153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.230 [2024-07-12 14:57:24.796160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.230 [2024-07-12 14:57:24.796164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.230 [2024-07-12 14:57:24.796169] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf009c0) on tqpair=0xebdc00 00:17:46.230 [2024-07-12 14:57:24.796174] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:46.230 [2024-07-12 14:57:24.796180] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:46.230 [2024-07-12 14:57:24.796188] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:46.230 [2024-07-12 14:57:24.796296] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:46.231 [2024-07-12 14:57:24.796305] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:46.231 [2024-07-12 14:57:24.796315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.796320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.796324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebdc00) 00:17:46.231 [2024-07-12 14:57:24.796333] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.231 [2024-07-12 14:57:24.796356] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf009c0, cid 0, qid 0 00:17:46.231 [2024-07-12 14:57:24.796415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.231 [2024-07-12 14:57:24.796422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.231 [2024-07-12 14:57:24.796426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.796430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf009c0) on tqpair=0xebdc00 00:17:46.231 [2024-07-12 14:57:24.796436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:46.231 [2024-07-12 14:57:24.796448] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.796452] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.796457] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebdc00) 00:17:46.231 [2024-07-12 14:57:24.796464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.231 [2024-07-12 14:57:24.796483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf009c0, cid 0, qid 0 00:17:46.231 [2024-07-12 14:57:24.796554] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.231 [2024-07-12 14:57:24.796564] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.231 [2024-07-12 14:57:24.796568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.796572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf009c0) on tqpair=0xebdc00 00:17:46.231 [2024-07-12 14:57:24.796578] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:46.231 [2024-07-12 14:57:24.796584] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:46.231 [2024-07-12 14:57:24.796593] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:46.231 [2024-07-12 14:57:24.796604] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:46.231 [2024-07-12 14:57:24.796615] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.796620] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebdc00) 00:17:46.231 [2024-07-12 14:57:24.796628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.231 [2024-07-12 14:57:24.796662] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf009c0, cid 0, qid 0 00:17:46.231 [2024-07-12 14:57:24.796762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:46.231 [2024-07-12 14:57:24.796769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:46.231 [2024-07-12 
14:57:24.796774] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.796778] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebdc00): datao=0, datal=4096, cccid=0 00:17:46.231 [2024-07-12 14:57:24.796784] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf009c0) on tqpair(0xebdc00): expected_datao=0, payload_size=4096 00:17:46.231 [2024-07-12 14:57:24.796789] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.796798] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.796803] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.796812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.231 [2024-07-12 14:57:24.796818] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.231 [2024-07-12 14:57:24.796822] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.796826] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf009c0) on tqpair=0xebdc00 00:17:46.231 [2024-07-12 14:57:24.796836] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:46.231 [2024-07-12 14:57:24.796842] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:46.231 [2024-07-12 14:57:24.796851] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:46.231 [2024-07-12 14:57:24.796857] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:46.231 [2024-07-12 14:57:24.796862] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:46.231 [2024-07-12 14:57:24.796867] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:46.231 [2024-07-12 14:57:24.796877] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:46.231 [2024-07-12 14:57:24.796886] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.796890] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.796895] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebdc00) 00:17:46.231 [2024-07-12 14:57:24.796904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:46.231 [2024-07-12 14:57:24.796926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf009c0, cid 0, qid 0 00:17:46.231 [2024-07-12 14:57:24.796990] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.231 [2024-07-12 14:57:24.796997] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.231 [2024-07-12 14:57:24.797001] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.797005] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf009c0) on tqpair=0xebdc00 00:17:46.231 [2024-07-12 14:57:24.797014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.231 [2024-07-12 
14:57:24.797018] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.797023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebdc00) 00:17:46.231 [2024-07-12 14:57:24.797030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.231 [2024-07-12 14:57:24.797037] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.797041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.797045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xebdc00) 00:17:46.231 [2024-07-12 14:57:24.797051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.231 [2024-07-12 14:57:24.797058] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.797062] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.797066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xebdc00) 00:17:46.231 [2024-07-12 14:57:24.797072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.231 [2024-07-12 14:57:24.797079] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.797083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.797087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.231 [2024-07-12 14:57:24.797093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.231 [2024-07-12 14:57:24.797099] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:46.231 [2024-07-12 14:57:24.797112] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:46.231 [2024-07-12 14:57:24.797120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.797125] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebdc00) 00:17:46.231 [2024-07-12 14:57:24.797132] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.231 [2024-07-12 14:57:24.797154] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf009c0, cid 0, qid 0 00:17:46.231 [2024-07-12 14:57:24.797161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00b40, cid 1, qid 0 00:17:46.231 [2024-07-12 14:57:24.797167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00cc0, cid 2, qid 0 00:17:46.231 [2024-07-12 14:57:24.797172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.231 [2024-07-12 14:57:24.797177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00fc0, cid 4, qid 0 00:17:46.231 [2024-07-12 14:57:24.797273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.231 
[2024-07-12 14:57:24.797280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.231 [2024-07-12 14:57:24.797284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.797289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00fc0) on tqpair=0xebdc00 00:17:46.231 [2024-07-12 14:57:24.797294] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:46.231 [2024-07-12 14:57:24.797300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:46.231 [2024-07-12 14:57:24.797309] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:46.231 [2024-07-12 14:57:24.797316] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:46.231 [2024-07-12 14:57:24.797323] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.797328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.797332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebdc00) 00:17:46.231 [2024-07-12 14:57:24.797340] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:46.231 [2024-07-12 14:57:24.797359] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00fc0, cid 4, qid 0 00:17:46.231 [2024-07-12 14:57:24.797415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.231 [2024-07-12 14:57:24.797423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.231 [2024-07-12 14:57:24.797427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.797431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00fc0) on tqpair=0xebdc00 00:17:46.231 [2024-07-12 14:57:24.797502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:46.231 [2024-07-12 14:57:24.797528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:46.231 [2024-07-12 14:57:24.797539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.231 [2024-07-12 14:57:24.797544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebdc00) 00:17:46.231 [2024-07-12 14:57:24.797552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.231 [2024-07-12 14:57:24.797576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00fc0, cid 4, qid 0 00:17:46.231 [2024-07-12 14:57:24.797651] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:46.232 [2024-07-12 14:57:24.797659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:46.232 [2024-07-12 14:57:24.797663] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.797667] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0xebdc00): datao=0, datal=4096, cccid=4 00:17:46.232 [2024-07-12 14:57:24.797672] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf00fc0) on tqpair(0xebdc00): expected_datao=0, payload_size=4096 00:17:46.232 [2024-07-12 14:57:24.797677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.797685] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.797689] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.797698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.232 [2024-07-12 14:57:24.797704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.232 [2024-07-12 14:57:24.797708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.797713] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00fc0) on tqpair=0xebdc00 00:17:46.232 [2024-07-12 14:57:24.797724] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:46.232 [2024-07-12 14:57:24.797737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:46.232 [2024-07-12 14:57:24.797750] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:46.232 [2024-07-12 14:57:24.797759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.797764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebdc00) 00:17:46.232 [2024-07-12 14:57:24.797772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.232 [2024-07-12 14:57:24.797794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00fc0, cid 4, qid 0 00:17:46.232 [2024-07-12 14:57:24.797873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:46.232 [2024-07-12 14:57:24.797925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:46.232 [2024-07-12 14:57:24.797930] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.797934] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebdc00): datao=0, datal=4096, cccid=4 00:17:46.232 [2024-07-12 14:57:24.797940] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf00fc0) on tqpair(0xebdc00): expected_datao=0, payload_size=4096 00:17:46.232 [2024-07-12 14:57:24.797944] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.797952] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.797956] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.797967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.232 [2024-07-12 14:57:24.797974] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.232 [2024-07-12 14:57:24.797978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.797982] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00fc0) on tqpair=0xebdc00 00:17:46.232 [2024-07-12 14:57:24.798002] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:46.232 [2024-07-12 14:57:24.798014] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:46.232 [2024-07-12 14:57:24.798025] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798029] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebdc00) 00:17:46.232 [2024-07-12 14:57:24.798038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.232 [2024-07-12 14:57:24.798065] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00fc0, cid 4, qid 0 00:17:46.232 [2024-07-12 14:57:24.798132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:46.232 [2024-07-12 14:57:24.798140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:46.232 [2024-07-12 14:57:24.798144] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798148] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebdc00): datao=0, datal=4096, cccid=4 00:17:46.232 [2024-07-12 14:57:24.798154] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf00fc0) on tqpair(0xebdc00): expected_datao=0, payload_size=4096 00:17:46.232 [2024-07-12 14:57:24.798159] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798166] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798170] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.232 [2024-07-12 14:57:24.798185] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.232 [2024-07-12 14:57:24.798189] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00fc0) on tqpair=0xebdc00 00:17:46.232 [2024-07-12 14:57:24.798203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:46.232 [2024-07-12 14:57:24.798212] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:46.232 [2024-07-12 14:57:24.798224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:46.232 [2024-07-12 14:57:24.798231] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:46.232 [2024-07-12 14:57:24.798237] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:46.232 [2024-07-12 14:57:24.798242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:46.232 [2024-07-12 14:57:24.798248] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:46.232 [2024-07-12 14:57:24.798253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:46.232 [2024-07-12 14:57:24.798259] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:46.232 [2024-07-12 14:57:24.798282] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebdc00) 00:17:46.232 [2024-07-12 14:57:24.798296] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.232 [2024-07-12 14:57:24.798304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xebdc00) 00:17:46.232 [2024-07-12 14:57:24.798319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.232 [2024-07-12 14:57:24.798346] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00fc0, cid 4, qid 0 00:17:46.232 [2024-07-12 14:57:24.798354] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf01140, cid 5, qid 0 00:17:46.232 [2024-07-12 14:57:24.798432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.232 [2024-07-12 14:57:24.798440] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.232 [2024-07-12 14:57:24.798444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00fc0) on tqpair=0xebdc00 00:17:46.232 [2024-07-12 14:57:24.798456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.232 [2024-07-12 14:57:24.798462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.232 [2024-07-12 14:57:24.798466] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798470] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf01140) on tqpair=0xebdc00 00:17:46.232 [2024-07-12 14:57:24.798482] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xebdc00) 00:17:46.232 [2024-07-12 14:57:24.798495] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.232 [2024-07-12 14:57:24.798527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf01140, cid 5, qid 0 00:17:46.232 [2024-07-12 14:57:24.798585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.232 [2024-07-12 14:57:24.798593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.232 [2024-07-12 14:57:24.798597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf01140) on 
tqpair=0xebdc00 00:17:46.232 [2024-07-12 14:57:24.798614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798619] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xebdc00) 00:17:46.232 [2024-07-12 14:57:24.798626] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.232 [2024-07-12 14:57:24.798648] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf01140, cid 5, qid 0 00:17:46.232 [2024-07-12 14:57:24.798707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.232 [2024-07-12 14:57:24.798714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.232 [2024-07-12 14:57:24.798718] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798723] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf01140) on tqpair=0xebdc00 00:17:46.232 [2024-07-12 14:57:24.798734] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xebdc00) 00:17:46.232 [2024-07-12 14:57:24.798747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.232 [2024-07-12 14:57:24.798765] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf01140, cid 5, qid 0 00:17:46.232 [2024-07-12 14:57:24.798818] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.232 [2024-07-12 14:57:24.798826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.232 [2024-07-12 14:57:24.798830] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf01140) on tqpair=0xebdc00 00:17:46.232 [2024-07-12 14:57:24.798854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xebdc00) 00:17:46.232 [2024-07-12 14:57:24.798868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.232 [2024-07-12 14:57:24.798877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798882] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebdc00) 00:17:46.232 [2024-07-12 14:57:24.798888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.232 [2024-07-12 14:57:24.798896] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.232 [2024-07-12 14:57:24.798901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xebdc00) 00:17:46.232 [2024-07-12 14:57:24.798908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.232 [2024-07-12 14:57:24.798916] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:17:46.233 [2024-07-12 14:57:24.798921] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xebdc00) 00:17:46.233 [2024-07-12 14:57:24.798927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.233 [2024-07-12 14:57:24.798949] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf01140, cid 5, qid 0 00:17:46.233 [2024-07-12 14:57:24.798957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00fc0, cid 4, qid 0 00:17:46.233 [2024-07-12 14:57:24.798962] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf012c0, cid 6, qid 0 00:17:46.233 [2024-07-12 14:57:24.798967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf01440, cid 7, qid 0 00:17:46.233 [2024-07-12 14:57:24.799108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:46.233 [2024-07-12 14:57:24.799116] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:46.233 [2024-07-12 14:57:24.799120] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799124] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebdc00): datao=0, datal=8192, cccid=5 00:17:46.233 [2024-07-12 14:57:24.799129] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf01140) on tqpair(0xebdc00): expected_datao=0, payload_size=8192 00:17:46.233 [2024-07-12 14:57:24.799134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799152] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799157] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:46.233 [2024-07-12 14:57:24.799169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:46.233 [2024-07-12 14:57:24.799173] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799177] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebdc00): datao=0, datal=512, cccid=4 00:17:46.233 [2024-07-12 14:57:24.799182] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf00fc0) on tqpair(0xebdc00): expected_datao=0, payload_size=512 00:17:46.233 [2024-07-12 14:57:24.799186] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799193] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799197] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:46.233 [2024-07-12 14:57:24.799209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:46.233 [2024-07-12 14:57:24.799213] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799217] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebdc00): datao=0, datal=512, cccid=6 00:17:46.233 [2024-07-12 14:57:24.799221] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf012c0) on tqpair(0xebdc00): expected_datao=0, payload_size=512 00:17:46.233 [2024-07-12 14:57:24.799226] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799233] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799237] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:46.233 [2024-07-12 14:57:24.799248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:46.233 [2024-07-12 14:57:24.799252] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799256] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebdc00): datao=0, datal=4096, cccid=7 00:17:46.233 [2024-07-12 14:57:24.799261] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf01440) on tqpair(0xebdc00): expected_datao=0, payload_size=4096 00:17:46.233 [2024-07-12 14:57:24.799266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799273] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799277] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.233 ===================================================== 00:17:46.233 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:46.233 ===================================================== 00:17:46.233 Controller Capabilities/Features 00:17:46.233 ================================ 00:17:46.233 Vendor ID: 8086 00:17:46.233 Subsystem Vendor ID: 8086 00:17:46.233 Serial Number: SPDK00000000000001 00:17:46.233 Model Number: SPDK bdev Controller 00:17:46.233 Firmware Version: 24.09 00:17:46.233 Recommended Arb Burst: 6 00:17:46.233 IEEE OUI Identifier: e4 d2 5c 00:17:46.233 Multi-path I/O 00:17:46.233 May have multiple subsystem ports: Yes 00:17:46.233 May have multiple controllers: Yes 00:17:46.233 Associated with SR-IOV VF: No 00:17:46.233 Max Data Transfer Size: 131072 00:17:46.233 Max Number of Namespaces: 32 00:17:46.233 Max Number of I/O Queues: 127 00:17:46.233 NVMe Specification Version (VS): 1.3 00:17:46.233 NVMe Specification Version (Identify): 1.3 00:17:46.233 Maximum Queue Entries: 128 00:17:46.233 Contiguous Queues Required: Yes 00:17:46.233 Arbitration Mechanisms Supported 00:17:46.233 Weighted Round Robin: Not Supported 00:17:46.233 Vendor Specific: Not Supported 00:17:46.233 Reset Timeout: 15000 ms 00:17:46.233 Doorbell Stride: 4 bytes 00:17:46.233 NVM Subsystem Reset: Not Supported 00:17:46.233 Command Sets Supported 00:17:46.233 NVM Command Set: Supported 00:17:46.233 Boot Partition: Not Supported 00:17:46.233 Memory Page Size Minimum: 4096 bytes 00:17:46.233 Memory Page Size Maximum: 4096 bytes 00:17:46.233 Persistent Memory Region: Not Supported 00:17:46.233 Optional Asynchronous Events Supported 00:17:46.233 Namespace Attribute Notices: Supported 00:17:46.233 Firmware Activation Notices: Not Supported 00:17:46.233 ANA Change Notices: Not Supported 00:17:46.233 PLE Aggregate Log Change Notices: Not Supported 00:17:46.233 LBA Status Info Alert Notices: Not Supported 00:17:46.233 EGE Aggregate Log Change Notices: Not Supported 00:17:46.233 Normal NVM Subsystem Shutdown event: Not Supported 00:17:46.233 Zone Descriptor Change Notices: Not Supported 00:17:46.233 Discovery Log Change Notices: Not Supported 00:17:46.233 Controller Attributes 00:17:46.233 128-bit Host Identifier: 
Supported 00:17:46.233 Non-Operational Permissive Mode: Not Supported 00:17:46.233 NVM Sets: Not Supported 00:17:46.233 Read Recovery Levels: Not Supported 00:17:46.233 Endurance Groups: Not Supported 00:17:46.233 Predictable Latency Mode: Not Supported 00:17:46.233 Traffic Based Keep ALive: Not Supported 00:17:46.233 Namespace Granularity: Not Supported 00:17:46.233 SQ Associations: Not Supported 00:17:46.233 UUID List: Not Supported 00:17:46.233 Multi-Domain Subsystem: Not Supported 00:17:46.233 Fixed Capacity Management: Not Supported 00:17:46.233 Variable Capacity Management: Not Supported 00:17:46.233 Delete Endurance Group: Not Supported 00:17:46.233 Delete NVM Set: Not Supported 00:17:46.233 Extended LBA Formats Supported: Not Supported 00:17:46.233 Flexible Data Placement Supported: Not Supported 00:17:46.233 00:17:46.233 Controller Memory Buffer Support 00:17:46.233 ================================ 00:17:46.233 Supported: No 00:17:46.233 00:17:46.233 Persistent Memory Region Support 00:17:46.233 ================================ 00:17:46.233 Supported: No 00:17:46.233 00:17:46.233 Admin Command Set Attributes 00:17:46.233 ============================ 00:17:46.233 Security Send/Receive: Not Supported 00:17:46.233 Format NVM: Not Supported 00:17:46.233 Firmware Activate/Download: Not Supported 00:17:46.233 Namespace Management: Not Supported 00:17:46.233 Device Self-Test: Not Supported 00:17:46.233 Directives: Not Supported 00:17:46.233 NVMe-MI: Not Supported 00:17:46.233 Virtualization Management: Not Supported 00:17:46.233 Doorbell Buffer Config: Not Supported 00:17:46.233 Get LBA Status Capability: Not Supported 00:17:46.233 Command & Feature Lockdown Capability: Not Supported 00:17:46.233 Abort Command Limit: 4 00:17:46.233 Async Event Request Limit: 4 00:17:46.233 Number of Firmware Slots: N/A 00:17:46.233 Firmware Slot 1 Read-Only: N/A 00:17:46.233 Firmware Activation Without Reset: [2024-07-12 14:57:24.799292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.233 [2024-07-12 14:57:24.799296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf01140) on tqpair=0xebdc00 00:17:46.233 [2024-07-12 14:57:24.799316] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.233 [2024-07-12 14:57:24.799324] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.233 [2024-07-12 14:57:24.799327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799332] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00fc0) on tqpair=0xebdc00 00:17:46.233 [2024-07-12 14:57:24.799346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.233 [2024-07-12 14:57:24.799353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.233 [2024-07-12 14:57:24.799357] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf012c0) on tqpair=0xebdc00 00:17:46.233 [2024-07-12 14:57:24.799369] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.233 [2024-07-12 14:57:24.799376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.233 [2024-07-12 14:57:24.799379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.233 [2024-07-12 14:57:24.799384] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf01440) on tqpair=0xebdc00 00:17:46.233 N/A 00:17:46.233 Multiple Update Detection Support: N/A 00:17:46.233 Firmware Update Granularity: No Information Provided 00:17:46.233 Per-Namespace SMART Log: No 00:17:46.233 Asymmetric Namespace Access Log Page: Not Supported 00:17:46.233 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:46.233 Command Effects Log Page: Supported 00:17:46.233 Get Log Page Extended Data: Supported 00:17:46.233 Telemetry Log Pages: Not Supported 00:17:46.233 Persistent Event Log Pages: Not Supported 00:17:46.233 Supported Log Pages Log Page: May Support 00:17:46.233 Commands Supported & Effects Log Page: Not Supported 00:17:46.233 Feature Identifiers & Effects Log Page:May Support 00:17:46.233 NVMe-MI Commands & Effects Log Page: May Support 00:17:46.233 Data Area 4 for Telemetry Log: Not Supported 00:17:46.233 Error Log Page Entries Supported: 128 00:17:46.233 Keep Alive: Supported 00:17:46.233 Keep Alive Granularity: 10000 ms 00:17:46.233 00:17:46.233 NVM Command Set Attributes 00:17:46.234 ========================== 00:17:46.234 Submission Queue Entry Size 00:17:46.234 Max: 64 00:17:46.234 Min: 64 00:17:46.234 Completion Queue Entry Size 00:17:46.234 Max: 16 00:17:46.234 Min: 16 00:17:46.234 Number of Namespaces: 32 00:17:46.234 Compare Command: Supported 00:17:46.234 Write Uncorrectable Command: Not Supported 00:17:46.234 Dataset Management Command: Supported 00:17:46.234 Write Zeroes Command: Supported 00:17:46.234 Set Features Save Field: Not Supported 00:17:46.234 Reservations: Supported 00:17:46.234 Timestamp: Not Supported 00:17:46.234 Copy: Supported 00:17:46.234 Volatile Write Cache: Present 00:17:46.234 Atomic Write Unit (Normal): 1 00:17:46.234 Atomic Write Unit (PFail): 1 00:17:46.234 Atomic Compare & Write Unit: 1 00:17:46.234 Fused Compare & Write: Supported 00:17:46.234 Scatter-Gather List 00:17:46.234 SGL Command Set: Supported 00:17:46.234 SGL Keyed: Supported 00:17:46.234 SGL Bit Bucket Descriptor: Not Supported 00:17:46.234 SGL Metadata Pointer: Not Supported 00:17:46.234 Oversized SGL: Not Supported 00:17:46.234 SGL Metadata Address: Not Supported 00:17:46.234 SGL Offset: Supported 00:17:46.234 Transport SGL Data Block: Not Supported 00:17:46.234 Replay Protected Memory Block: Not Supported 00:17:46.234 00:17:46.234 Firmware Slot Information 00:17:46.234 ========================= 00:17:46.234 Active slot: 1 00:17:46.234 Slot 1 Firmware Revision: 24.09 00:17:46.234 00:17:46.234 00:17:46.234 Commands Supported and Effects 00:17:46.234 ============================== 00:17:46.234 Admin Commands 00:17:46.234 -------------- 00:17:46.234 Get Log Page (02h): Supported 00:17:46.234 Identify (06h): Supported 00:17:46.234 Abort (08h): Supported 00:17:46.234 Set Features (09h): Supported 00:17:46.234 Get Features (0Ah): Supported 00:17:46.234 Asynchronous Event Request (0Ch): Supported 00:17:46.234 Keep Alive (18h): Supported 00:17:46.234 I/O Commands 00:17:46.234 ------------ 00:17:46.234 Flush (00h): Supported LBA-Change 00:17:46.234 Write (01h): Supported LBA-Change 00:17:46.234 Read (02h): Supported 00:17:46.234 Compare (05h): Supported 00:17:46.234 Write Zeroes (08h): Supported LBA-Change 00:17:46.234 Dataset Management (09h): Supported LBA-Change 00:17:46.234 Copy (19h): Supported LBA-Change 00:17:46.234 00:17:46.234 Error Log 00:17:46.234 ========= 00:17:46.234 00:17:46.234 Arbitration 00:17:46.234 =========== 00:17:46.234 Arbitration Burst: 1 00:17:46.234 00:17:46.234 Power Management 
00:17:46.234 ================ 00:17:46.234 Number of Power States: 1 00:17:46.234 Current Power State: Power State #0 00:17:46.234 Power State #0: 00:17:46.234 Max Power: 0.00 W 00:17:46.234 Non-Operational State: Operational 00:17:46.234 Entry Latency: Not Reported 00:17:46.234 Exit Latency: Not Reported 00:17:46.234 Relative Read Throughput: 0 00:17:46.234 Relative Read Latency: 0 00:17:46.234 Relative Write Throughput: 0 00:17:46.234 Relative Write Latency: 0 00:17:46.234 Idle Power: Not Reported 00:17:46.234 Active Power: Not Reported 00:17:46.234 Non-Operational Permissive Mode: Not Supported 00:17:46.234 00:17:46.234 Health Information 00:17:46.234 ================== 00:17:46.234 Critical Warnings: 00:17:46.234 Available Spare Space: OK 00:17:46.234 Temperature: OK 00:17:46.234 Device Reliability: OK 00:17:46.234 Read Only: No 00:17:46.234 Volatile Memory Backup: OK 00:17:46.234 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:46.234 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:46.234 Available Spare: 0% 00:17:46.234 Available Spare Threshold: 0% 00:17:46.234 Life Percentage Used:[2024-07-12 14:57:24.799497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.234 [2024-07-12 14:57:24.799506] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xebdc00) 00:17:46.234 [2024-07-12 14:57:24.803528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.234 [2024-07-12 14:57:24.803578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf01440, cid 7, qid 0 00:17:46.234 [2024-07-12 14:57:24.803654] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.234 [2024-07-12 14:57:24.803662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.234 [2024-07-12 14:57:24.803667] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.234 [2024-07-12 14:57:24.803671] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf01440) on tqpair=0xebdc00 00:17:46.234 [2024-07-12 14:57:24.803719] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:46.234 [2024-07-12 14:57:24.803733] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf009c0) on tqpair=0xebdc00 00:17:46.234 [2024-07-12 14:57:24.803741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.234 [2024-07-12 14:57:24.803747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00b40) on tqpair=0xebdc00 00:17:46.234 [2024-07-12 14:57:24.803752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.234 [2024-07-12 14:57:24.803758] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00cc0) on tqpair=0xebdc00 00:17:46.234 [2024-07-12 14:57:24.803763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.234 [2024-07-12 14:57:24.803769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.234 [2024-07-12 14:57:24.803774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.234 [2024-07-12 14:57:24.803785] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.234 [2024-07-12 14:57:24.803790] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.234 [2024-07-12 14:57:24.803794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.234 [2024-07-12 14:57:24.803803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.234 [2024-07-12 14:57:24.803829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.234 [2024-07-12 14:57:24.803883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.234 [2024-07-12 14:57:24.803891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.234 [2024-07-12 14:57:24.803895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.234 [2024-07-12 14:57:24.803899] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.234 [2024-07-12 14:57:24.803908] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.234 [2024-07-12 14:57:24.803913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.234 [2024-07-12 14:57:24.803917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.234 [2024-07-12 14:57:24.803925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.234 [2024-07-12 14:57:24.803948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.234 [2024-07-12 14:57:24.804032] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.234 [2024-07-12 14:57:24.804040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.234 [2024-07-12 14:57:24.804044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.234 [2024-07-12 14:57:24.804048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.234 [2024-07-12 14:57:24.804054] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:46.234 [2024-07-12 14:57:24.804059] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:46.234 [2024-07-12 14:57:24.804070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.234 [2024-07-12 14:57:24.804075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.234 [2024-07-12 14:57:24.804080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.234 [2024-07-12 14:57:24.804087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.234 [2024-07-12 14:57:24.804106] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.234 [2024-07-12 14:57:24.804166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.234 [2024-07-12 14:57:24.804173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.234 [2024-07-12 14:57:24.804177] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.235 [2024-07-12 14:57:24.804194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804204] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.235 [2024-07-12 14:57:24.804211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.235 [2024-07-12 14:57:24.804251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.235 [2024-07-12 14:57:24.804306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.235 [2024-07-12 14:57:24.804315] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.235 [2024-07-12 14:57:24.804319] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.235 [2024-07-12 14:57:24.804335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804341] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804345] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.235 [2024-07-12 14:57:24.804353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.235 [2024-07-12 14:57:24.804374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.235 [2024-07-12 14:57:24.804431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.235 [2024-07-12 14:57:24.804438] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.235 [2024-07-12 14:57:24.804442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.235 [2024-07-12 14:57:24.804458] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804468] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.235 [2024-07-12 14:57:24.804486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.235 [2024-07-12 14:57:24.804504] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.235 [2024-07-12 14:57:24.804575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.235 [2024-07-12 14:57:24.804585] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.235 [2024-07-12 14:57:24.804590] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.235 [2024-07-12 14:57:24.804606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804612] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804616] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.235 [2024-07-12 14:57:24.804624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.235 [2024-07-12 14:57:24.804646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.235 [2024-07-12 14:57:24.804700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.235 [2024-07-12 14:57:24.804707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.235 [2024-07-12 14:57:24.804711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.235 [2024-07-12 14:57:24.804726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804732] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804736] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.235 [2024-07-12 14:57:24.804744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.235 [2024-07-12 14:57:24.804762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.235 [2024-07-12 14:57:24.804816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.235 [2024-07-12 14:57:24.804823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.235 [2024-07-12 14:57:24.804827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804831] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.235 [2024-07-12 14:57:24.804842] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804852] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.235 [2024-07-12 14:57:24.804859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.235 [2024-07-12 14:57:24.804878] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.235 [2024-07-12 14:57:24.804933] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.235 [2024-07-12 14:57:24.804940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.235 [2024-07-12 14:57:24.804944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.235 [2024-07-12 14:57:24.804960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.804969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.235 [2024-07-12 
14:57:24.804977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.235 [2024-07-12 14:57:24.805004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.235 [2024-07-12 14:57:24.805058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.235 [2024-07-12 14:57:24.805065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.235 [2024-07-12 14:57:24.805070] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.235 [2024-07-12 14:57:24.805085] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805094] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.235 [2024-07-12 14:57:24.805102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.235 [2024-07-12 14:57:24.805121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.235 [2024-07-12 14:57:24.805172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.235 [2024-07-12 14:57:24.805180] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.235 [2024-07-12 14:57:24.805184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.235 [2024-07-12 14:57:24.805199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.235 [2024-07-12 14:57:24.805216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.235 [2024-07-12 14:57:24.805235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.235 [2024-07-12 14:57:24.805288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.235 [2024-07-12 14:57:24.805295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.235 [2024-07-12 14:57:24.805299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.235 [2024-07-12 14:57:24.805315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805325] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.235 [2024-07-12 14:57:24.805332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.235 [2024-07-12 14:57:24.805351] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.235 [2024-07-12 14:57:24.805404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.235 [2024-07-12 14:57:24.805411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.235 [2024-07-12 14:57:24.805415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.235 [2024-07-12 14:57:24.805431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805436] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.235 [2024-07-12 14:57:24.805448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.235 [2024-07-12 14:57:24.805466] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.235 [2024-07-12 14:57:24.805530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.235 [2024-07-12 14:57:24.805539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.235 [2024-07-12 14:57:24.805543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.235 [2024-07-12 14:57:24.805560] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805565] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.235 [2024-07-12 14:57:24.805577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.235 [2024-07-12 14:57:24.805598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.235 [2024-07-12 14:57:24.805656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.235 [2024-07-12 14:57:24.805663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.235 [2024-07-12 14:57:24.805667] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805672] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.235 [2024-07-12 14:57:24.805683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805688] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.235 [2024-07-12 14:57:24.805692] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.235 [2024-07-12 14:57:24.805700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.236 [2024-07-12 14:57:24.805718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.236 [2024-07-12 14:57:24.805769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.236 [2024-07-12 
14:57:24.805777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.236 [2024-07-12 14:57:24.805781] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.805785] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.236 [2024-07-12 14:57:24.805796] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.805802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.805806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.236 [2024-07-12 14:57:24.805813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.236 [2024-07-12 14:57:24.805833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.236 [2024-07-12 14:57:24.805886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.236 [2024-07-12 14:57:24.805894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.236 [2024-07-12 14:57:24.805898] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.805903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.236 [2024-07-12 14:57:24.805914] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.805919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.805923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.236 [2024-07-12 14:57:24.805931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.236 [2024-07-12 14:57:24.805949] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.236 [2024-07-12 14:57:24.806006] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.236 [2024-07-12 14:57:24.806013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.236 [2024-07-12 14:57:24.806017] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.236 [2024-07-12 14:57:24.806033] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.236 [2024-07-12 14:57:24.806050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.236 [2024-07-12 14:57:24.806068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.236 [2024-07-12 14:57:24.806122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.236 [2024-07-12 14:57:24.806129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.236 [2024-07-12 14:57:24.806133] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.236 [2024-07-12 
14:57:24.806137] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.236 [2024-07-12 14:57:24.806148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806153] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.236 [2024-07-12 14:57:24.806165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.236 [2024-07-12 14:57:24.806184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.236 [2024-07-12 14:57:24.806237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.236 [2024-07-12 14:57:24.806244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.236 [2024-07-12 14:57:24.806248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.236 [2024-07-12 14:57:24.806264] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.236 [2024-07-12 14:57:24.806281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.236 [2024-07-12 14:57:24.806300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.236 [2024-07-12 14:57:24.806353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.236 [2024-07-12 14:57:24.806360] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.236 [2024-07-12 14:57:24.806364] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.236 [2024-07-12 14:57:24.806380] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806389] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.236 [2024-07-12 14:57:24.806397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.236 [2024-07-12 14:57:24.806416] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.236 [2024-07-12 14:57:24.806470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.236 [2024-07-12 14:57:24.806478] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.236 [2024-07-12 14:57:24.806482] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806486] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.236 [2024-07-12 14:57:24.806497] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:17:46.236 [2024-07-12 14:57:24.806503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.236 [2024-07-12 14:57:24.806526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.236 [2024-07-12 14:57:24.806548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.236 [2024-07-12 14:57:24.806606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.236 [2024-07-12 14:57:24.806614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.236 [2024-07-12 14:57:24.806618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806623] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.236 [2024-07-12 14:57:24.806635] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.236 [2024-07-12 14:57:24.806652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.236 [2024-07-12 14:57:24.806672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.236 [2024-07-12 14:57:24.806731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.236 [2024-07-12 14:57:24.806739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.236 [2024-07-12 14:57:24.806743] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.236 [2024-07-12 14:57:24.806759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.236 [2024-07-12 14:57:24.806776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.236 [2024-07-12 14:57:24.806795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.236 [2024-07-12 14:57:24.806851] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.236 [2024-07-12 14:57:24.806859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.236 [2024-07-12 14:57:24.806863] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806867] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.236 [2024-07-12 14:57:24.806878] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806884] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806888] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xebdc00) 00:17:46.236 [2024-07-12 14:57:24.806896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.236 [2024-07-12 14:57:24.806915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.236 [2024-07-12 14:57:24.806971] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.236 [2024-07-12 14:57:24.806978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.236 [2024-07-12 14:57:24.806982] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.806987] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.236 [2024-07-12 14:57:24.806998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.807003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.807007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.236 [2024-07-12 14:57:24.807015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.236 [2024-07-12 14:57:24.807034] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.236 [2024-07-12 14:57:24.807087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.236 [2024-07-12 14:57:24.807095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.236 [2024-07-12 14:57:24.807099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.236 [2024-07-12 14:57:24.807103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.237 [2024-07-12 14:57:24.807115] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.237 [2024-07-12 14:57:24.807120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.237 [2024-07-12 14:57:24.807124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.237 [2024-07-12 14:57:24.807132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.237 [2024-07-12 14:57:24.807150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.237 [2024-07-12 14:57:24.807208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.237 [2024-07-12 14:57:24.807216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.237 [2024-07-12 14:57:24.807220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.237 [2024-07-12 14:57:24.807224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.237 [2024-07-12 14:57:24.807235] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.237 [2024-07-12 14:57:24.807241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.237 [2024-07-12 14:57:24.807245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.237 [2024-07-12 14:57:24.807253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.237 
[2024-07-12 14:57:24.807271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.237 [2024-07-12 14:57:24.807324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.237 [2024-07-12 14:57:24.807332] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.237 [2024-07-12 14:57:24.807336] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.237 [2024-07-12 14:57:24.807340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.237 [2024-07-12 14:57:24.807351] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.237 [2024-07-12 14:57:24.807357] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.237 [2024-07-12 14:57:24.807361] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.237 [2024-07-12 14:57:24.807369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.237 [2024-07-12 14:57:24.807387] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.237 [2024-07-12 14:57:24.807441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.237 [2024-07-12 14:57:24.807448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.237 [2024-07-12 14:57:24.807452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.237 [2024-07-12 14:57:24.807456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.237 [2024-07-12 14:57:24.807467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.237 [2024-07-12 14:57:24.807472] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.237 [2024-07-12 14:57:24.807477] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.237 [2024-07-12 14:57:24.807484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.237 [2024-07-12 14:57:24.807503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.237 [2024-07-12 14:57:24.811537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.237 [2024-07-12 14:57:24.811560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.237 [2024-07-12 14:57:24.811566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.237 [2024-07-12 14:57:24.811571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.237 [2024-07-12 14:57:24.811587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.237 [2024-07-12 14:57:24.811594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.237 [2024-07-12 14:57:24.811598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebdc00) 00:17:46.237 [2024-07-12 14:57:24.811608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.237 [2024-07-12 14:57:24.811638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf00e40, cid 3, qid 0 00:17:46.237 [2024-07-12 14:57:24.811697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:17:46.237 [2024-07-12 14:57:24.811705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.237 [2024-07-12 14:57:24.811709] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.237 [2024-07-12 14:57:24.811714] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf00e40) on tqpair=0xebdc00 00:17:46.237 [2024-07-12 14:57:24.811723] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:17:46.237 0% 00:17:46.237 Data Units Read: 0 00:17:46.237 Data Units Written: 0 00:17:46.237 Host Read Commands: 0 00:17:46.237 Host Write Commands: 0 00:17:46.237 Controller Busy Time: 0 minutes 00:17:46.237 Power Cycles: 0 00:17:46.237 Power On Hours: 0 hours 00:17:46.237 Unsafe Shutdowns: 0 00:17:46.237 Unrecoverable Media Errors: 0 00:17:46.237 Lifetime Error Log Entries: 0 00:17:46.237 Warning Temperature Time: 0 minutes 00:17:46.237 Critical Temperature Time: 0 minutes 00:17:46.237 00:17:46.237 Number of Queues 00:17:46.237 ================ 00:17:46.237 Number of I/O Submission Queues: 127 00:17:46.237 Number of I/O Completion Queues: 127 00:17:46.237 00:17:46.237 Active Namespaces 00:17:46.237 ================= 00:17:46.237 Namespace ID:1 00:17:46.237 Error Recovery Timeout: Unlimited 00:17:46.237 Command Set Identifier: NVM (00h) 00:17:46.237 Deallocate: Supported 00:17:46.237 Deallocated/Unwritten Error: Not Supported 00:17:46.237 Deallocated Read Value: Unknown 00:17:46.237 Deallocate in Write Zeroes: Not Supported 00:17:46.237 Deallocated Guard Field: 0xFFFF 00:17:46.237 Flush: Supported 00:17:46.237 Reservation: Supported 00:17:46.237 Namespace Sharing Capabilities: Multiple Controllers 00:17:46.237 Size (in LBAs): 131072 (0GiB) 00:17:46.237 Capacity (in LBAs): 131072 (0GiB) 00:17:46.237 Utilization (in LBAs): 131072 (0GiB) 00:17:46.237 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:46.237 EUI64: ABCDEF0123456789 00:17:46.237 UUID: 16fba050-7267-4f80-a293-99093c444f97 00:17:46.237 Thin Provisioning: Not Supported 00:17:46.237 Per-NS Atomic Units: Yes 00:17:46.237 Atomic Boundary Size (Normal): 0 00:17:46.237 Atomic Boundary Size (PFail): 0 00:17:46.237 Atomic Boundary Offset: 0 00:17:46.237 Maximum Single Source Range Length: 65535 00:17:46.237 Maximum Copy Length: 65535 00:17:46.237 Maximum Source Range Count: 1 00:17:46.237 NGUID/EUI64 Never Reused: No 00:17:46.237 Namespace Write Protected: No 00:17:46.237 Number of LBA Formats: 1 00:17:46.237 Current LBA Format: LBA Format #00 00:17:46.237 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:46.237 00:17:46.237 14:57:24 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:46.237 14:57:24 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:46.237 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.237 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:46.496 rmmod nvme_tcp 00:17:46.496 rmmod nvme_fabrics 00:17:46.496 rmmod nvme_keyring 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 86815 ']' 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 86815 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 86815 ']' 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 86815 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86815 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:46.496 killing process with pid 86815 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86815' 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 86815 00:17:46.496 14:57:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 86815 00:17:46.755 14:57:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:46.755 14:57:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:46.755 14:57:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:46.755 14:57:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:46.755 14:57:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:46.755 14:57:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.755 14:57:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.755 14:57:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.755 14:57:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:46.755 ************************************ 00:17:46.755 END TEST nvmf_identify 00:17:46.755 ************************************ 00:17:46.755 00:17:46.755 real 0m2.548s 00:17:46.755 user 0m7.252s 00:17:46.755 sys 0m0.610s 00:17:46.755 14:57:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:46.755 14:57:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:46.755 14:57:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:46.755 14:57:25 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:46.755 14:57:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:46.755 14:57:25 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.755 14:57:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:46.755 ************************************ 00:17:46.755 START TEST nvmf_perf 00:17:46.755 ************************************ 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:46.755 * Looking for test storage... 00:17:46.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.755 14:57:25 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:46.755 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:46.756 Cannot find device "nvmf_tgt_br" 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.756 Cannot find device "nvmf_tgt_br2" 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:46.756 Cannot find device "nvmf_tgt_br" 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:46.756 Cannot find device "nvmf_tgt_br2" 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:17:46.756 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:47.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:47.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:47.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:17:47.014 00:17:47.014 --- 10.0.0.2 ping statistics --- 00:17:47.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.014 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:17:47.014 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:47.014 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:47.014 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:17:47.014 00:17:47.014 --- 10.0.0.3 ping statistics --- 00:17:47.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.014 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:47.015 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:47.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:47.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:47.015 00:17:47.015 --- 10.0.0.1 ping statistics --- 00:17:47.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.015 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:47.015 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.015 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:17:47.015 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:47.015 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.015 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:47.015 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:47.015 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.015 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:47.015 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:47.273 14:57:25 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:47.273 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:47.273 14:57:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:47.273 14:57:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:47.273 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=87041 00:17:47.273 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:47.273 14:57:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 87041 00:17:47.273 14:57:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 87041 ']' 00:17:47.273 14:57:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.273 14:57:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.273 14:57:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.273 14:57:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.273 14:57:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:47.273 [2024-07-12 14:57:25.731811] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
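Note: the nvmf_veth_init trace above sets up an isolated topology in which the SPDK target runs inside the nvmf_tgt_ns_spdk namespace at 10.0.0.2 (plus 10.0.0.3 on a second interface), the initiator stays in the root namespace at 10.0.0.1, and the host-side veth ends are bridged on nvmf_br with TCP port 4420 allowed through iptables. Condensed from the commands in the trace (second target interface omitted; run as root), the setup amounts to roughly:

  # namespace for the target plus two veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # initiator 10.0.0.1 in the root namespace, target 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side peers together and open NVMe/TCP traffic
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # connectivity check, matching the pings logged above
  ping -c 1 10.0.0.2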
00:17:47.273 [2024-07-12 14:57:25.732433] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.273 [2024-07-12 14:57:25.872896] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:47.532 [2024-07-12 14:57:25.980961] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.532 [2024-07-12 14:57:25.981308] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.532 [2024-07-12 14:57:25.981479] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.532 [2024-07-12 14:57:25.981691] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.532 [2024-07-12 14:57:25.981854] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.532 [2024-07-12 14:57:25.982096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.532 [2024-07-12 14:57:25.982210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.532 [2024-07-12 14:57:25.982845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:47.532 [2024-07-12 14:57:25.982865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.532 14:57:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.532 14:57:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:17:47.532 14:57:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.532 14:57:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:47.532 14:57:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:47.532 14:57:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.532 14:57:26 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:47.532 14:57:26 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:48.100 14:57:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:48.100 14:57:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:48.359 14:57:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:48.359 14:57:26 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:48.617 14:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:48.617 14:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:48.617 14:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:48.617 14:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:48.617 14:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:48.883 [2024-07-12 14:57:27.528836] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.142 14:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:49.401 14:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:49.401 14:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:49.660 14:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:49.660 14:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:49.917 14:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.176 [2024-07-12 14:57:28.674184] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.176 14:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:50.434 14:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:50.434 14:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:50.434 14:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:50.434 14:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:51.809 Initializing NVMe Controllers 00:17:51.809 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:51.809 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:51.809 Initialization complete. Launching workers. 00:17:51.809 ======================================================== 00:17:51.809 Latency(us) 00:17:51.809 Device Information : IOPS MiB/s Average min max 00:17:51.809 PCIE (0000:00:10.0) NSID 1 from core 0: 23746.91 92.76 1351.21 309.63 5947.49 00:17:51.809 ======================================================== 00:17:51.809 Total : 23746.91 92.76 1351.21 309.63 5947.49 00:17:51.809 00:17:51.809 14:57:30 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:52.744 Initializing NVMe Controllers 00:17:52.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:52.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:52.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:52.744 Initialization complete. Launching workers. 
00:17:52.744 ======================================================== 00:17:52.744 Latency(us) 00:17:52.744 Device Information : IOPS MiB/s Average min max 00:17:52.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3241.66 12.66 308.10 118.16 6110.78 00:17:52.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.76 0.48 8145.80 5980.12 12067.66 00:17:52.744 ======================================================== 00:17:52.744 Total : 3364.42 13.14 594.08 118.16 12067.66 00:17:52.744 00:17:53.002 14:57:31 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:54.384 Initializing NVMe Controllers 00:17:54.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:54.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:54.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:54.384 Initialization complete. Launching workers. 00:17:54.384 ======================================================== 00:17:54.384 Latency(us) 00:17:54.384 Device Information : IOPS MiB/s Average min max 00:17:54.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8088.90 31.60 3956.20 670.38 12139.43 00:17:54.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2629.99 10.27 12281.60 5041.96 24413.03 00:17:54.384 ======================================================== 00:17:54.384 Total : 10718.89 41.87 5998.93 670.38 24413.03 00:17:54.384 00:17:54.384 14:57:32 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:54.384 14:57:32 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:56.910 Initializing NVMe Controllers 00:17:56.910 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:56.910 Controller IO queue size 128, less than required. 00:17:56.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:56.910 Controller IO queue size 128, less than required. 00:17:56.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:56.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:56.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:56.910 Initialization complete. Launching workers. 
00:17:56.910 ======================================================== 00:17:56.910 Latency(us) 00:17:56.910 Device Information : IOPS MiB/s Average min max 00:17:56.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1732.54 433.13 74528.63 43261.07 162961.18 00:17:56.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 383.07 95.77 357656.20 82669.36 883483.29 00:17:56.910 ======================================================== 00:17:56.910 Total : 2115.60 528.90 125793.70 43261.07 883483.29 00:17:56.910 00:17:56.910 14:57:35 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:57.168 Initializing NVMe Controllers 00:17:57.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:57.168 Controller IO queue size 128, less than required. 00:17:57.168 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:57.168 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:57.168 Controller IO queue size 128, less than required. 00:17:57.168 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:57.168 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:17:57.168 WARNING: Some requested NVMe devices were skipped 00:17:57.168 No valid NVMe controllers or AIO or URING devices found 00:17:57.168 14:57:35 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:59.697 Initializing NVMe Controllers 00:17:59.697 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:59.697 Controller IO queue size 128, less than required. 00:17:59.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:59.697 Controller IO queue size 128, less than required. 00:17:59.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:59.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:59.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:59.697 Initialization complete. Launching workers. 
00:17:59.697 00:17:59.697 ==================== 00:17:59.697 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:59.697 TCP transport: 00:17:59.697 polls: 8673 00:17:59.697 idle_polls: 4268 00:17:59.697 sock_completions: 4405 00:17:59.697 nvme_completions: 4517 00:17:59.697 submitted_requests: 6752 00:17:59.697 queued_requests: 1 00:17:59.697 00:17:59.697 ==================== 00:17:59.697 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:59.697 TCP transport: 00:17:59.697 polls: 10720 00:17:59.697 idle_polls: 7378 00:17:59.697 sock_completions: 3342 00:17:59.697 nvme_completions: 6279 00:17:59.697 submitted_requests: 9408 00:17:59.697 queued_requests: 1 00:17:59.697 ======================================================== 00:17:59.697 Latency(us) 00:17:59.697 Device Information : IOPS MiB/s Average min max 00:17:59.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1127.64 281.91 116518.08 74460.90 171449.70 00:17:59.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1567.60 391.90 82427.75 22061.07 169088.69 00:17:59.697 ======================================================== 00:17:59.697 Total : 2695.24 673.81 96690.48 22061.07 171449.70 00:17:59.697 00:17:59.697 14:57:38 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:59.697 14:57:38 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:59.955 rmmod nvme_tcp 00:17:59.955 rmmod nvme_fabrics 00:17:59.955 rmmod nvme_keyring 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 87041 ']' 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 87041 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 87041 ']' 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 87041 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87041 00:17:59.955 killing process with pid 87041 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:59.955 14:57:38 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87041' 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 87041 00:17:59.955 14:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 87041 00:18:00.523 14:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:00.523 14:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:00.523 14:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:00.523 14:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:00.523 14:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:00.523 14:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.523 14:57:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.523 14:57:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.523 14:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:00.782 ************************************ 00:18:00.782 END TEST nvmf_perf 00:18:00.782 ************************************ 00:18:00.782 00:18:00.782 real 0m13.932s 00:18:00.782 user 0m51.563s 00:18:00.782 sys 0m3.497s 00:18:00.782 14:57:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:00.782 14:57:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:00.782 14:57:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:00.782 14:57:39 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:00.782 14:57:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:00.782 14:57:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.782 14:57:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:00.782 ************************************ 00:18:00.782 START TEST nvmf_fio_host 00:18:00.782 ************************************ 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:00.782 * Looking for test storage... 
00:18:00.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:00.782 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
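Once networking is in place, both the perf and fio hosts export storage over NVMe/TCP through the same rpc.py sequence. A condensed sketch of the calls traced in the perf run above, with the NQN, serial number and listener address copied from the log ($RPC is just local shorthand for the rpc.py path, not a variable used by the scripts):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o
$RPC bdev_malloc_create 64 512                                     # prints the new bdev name, e.g. Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The perf runs then point spdk_nvme_perf at that listener with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420', while the fio_host test below reaches the same listener through the fio plugin.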
00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:00.783 Cannot find device "nvmf_tgt_br" 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.783 Cannot find device "nvmf_tgt_br2" 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:00.783 Cannot find device "nvmf_tgt_br" 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:00.783 Cannot find device "nvmf_tgt_br2" 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:18:00.783 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:01.041 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:01.041 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:01.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:18:01.041 00:18:01.041 --- 10.0.0.2 ping statistics --- 00:18:01.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.041 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:01.041 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:01.041 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:18:01.041 00:18:01.041 --- 10.0.0.3 ping statistics --- 00:18:01.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.041 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:01.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:01.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:01.041 00:18:01.041 --- 10.0.0.1 ping statistics --- 00:18:01.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.041 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87513 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87513 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 87513 ']' 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.041 14:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.042 14:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.042 14:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.042 14:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.301 [2024-07-12 14:57:39.732937] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:18:01.301 [2024-07-12 14:57:39.733085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.301 [2024-07-12 14:57:39.870619] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:01.301 [2024-07-12 14:57:39.931704] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:01.301 [2024-07-12 14:57:39.931958] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.301 [2024-07-12 14:57:39.932142] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.301 [2024-07-12 14:57:39.932200] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.301 [2024-07-12 14:57:39.932312] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.301 [2024-07-12 14:57:39.932462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.301 [2024-07-12 14:57:39.932566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.301 [2024-07-12 14:57:39.933145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:01.301 [2024-07-12 14:57:39.933190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.559 14:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.559 14:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:18:01.559 14:57:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:01.817 [2024-07-12 14:57:40.316341] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.817 14:57:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:01.817 14:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:01.817 14:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.817 14:57:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:02.075 Malloc1 00:18:02.075 14:57:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:02.332 14:57:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:02.590 14:57:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.848 [2024-07-12 14:57:41.344736] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.848 14:57:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:03.107 14:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:03.365 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:03.365 fio-3.35 00:18:03.365 Starting 1 thread 00:18:05.894 00:18:05.894 test: (groupid=0, jobs=1): err= 0: pid=87625: Fri Jul 12 14:57:44 2024 00:18:05.894 read: IOPS=8721, BW=34.1MiB/s (35.7MB/s)(68.4MiB/2008msec) 00:18:05.894 slat (usec): min=2, max=312, avg= 2.87, stdev= 3.17 00:18:05.894 clat (usec): min=3186, max=17184, avg=7665.83, stdev=677.05 00:18:05.894 lat (usec): min=3229, max=17187, avg=7668.70, stdev=676.86 00:18:05.894 clat percentiles (usec): 00:18:05.894 | 1.00th=[ 6456], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7242], 00:18:05.894 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7570], 60.00th=[ 7701], 00:18:05.894 | 70.00th=[ 7898], 80.00th=[ 8029], 90.00th=[ 8356], 95.00th=[ 8717], 00:18:05.894 | 99.00th=[ 9765], 99.50th=[10290], 99.90th=[13960], 99.95th=[14746], 00:18:05.894 | 99.99th=[17171] 00:18:05.894 bw ( KiB/s): min=33512, max=35672, per=100.00%, avg=34910.00, stdev=992.93, samples=4 00:18:05.894 iops : min= 8378, max= 8918, avg=8727.50, stdev=248.23, samples=4 00:18:05.894 write: IOPS=8721, BW=34.1MiB/s (35.7MB/s)(68.4MiB/2008msec); 0 zone resets 00:18:05.894 slat (usec): min=2, max=480, avg= 3.08, stdev= 5.03 00:18:05.894 clat (usec): min=2312, max=17088, avg=6944.46, stdev=642.69 00:18:05.894 lat (usec): 
min=2346, max=17091, avg=6947.54, stdev=642.61 00:18:05.894 clat percentiles (usec): 00:18:05.894 | 1.00th=[ 5735], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6521], 00:18:05.894 | 30.00th=[ 6718], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7046], 00:18:05.894 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7767], 00:18:05.894 | 99.00th=[ 8848], 99.50th=[ 9372], 99.90th=[14091], 99.95th=[14615], 00:18:05.894 | 99.99th=[16909] 00:18:05.894 bw ( KiB/s): min=34392, max=35272, per=100.00%, avg=34886.00, stdev=378.42, samples=4 00:18:05.894 iops : min= 8598, max= 8818, avg=8721.50, stdev=94.61, samples=4 00:18:05.894 lat (msec) : 4=0.08%, 10=99.42%, 20=0.50% 00:18:05.894 cpu : usr=64.62%, sys=24.91%, ctx=47, majf=0, minf=7 00:18:05.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:05.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:05.894 issued rwts: total=17513,17512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:05.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:05.894 00:18:05.894 Run status group 0 (all jobs): 00:18:05.894 READ: bw=34.1MiB/s (35.7MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.7MB/s), io=68.4MiB (71.7MB), run=2008-2008msec 00:18:05.894 WRITE: bw=34.1MiB/s (35.7MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.7MB/s), io=68.4MiB (71.7MB), run=2008-2008msec 00:18:05.894 14:57:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:05.894 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:05.894 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:05.894 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:05.894 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:05.894 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:05.894 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:05.894 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:05.894 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:05.894 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:05.894 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:05.894 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:05.894 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:05.894 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:05.895 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:05.895 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:05.895 14:57:44 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:05.895 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:05.895 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:05.895 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:05.895 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:05.895 14:57:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:05.895 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:05.895 fio-3.35 00:18:05.895 Starting 1 thread 00:18:08.419 00:18:08.419 test: (groupid=0, jobs=1): err= 0: pid=87669: Fri Jul 12 14:57:46 2024 00:18:08.419 read: IOPS=7284, BW=114MiB/s (119MB/s)(229MiB/2011msec) 00:18:08.419 slat (usec): min=3, max=153, avg= 4.42, stdev= 2.24 00:18:08.419 clat (usec): min=2928, max=19077, avg=10334.95, stdev=2666.53 00:18:08.419 lat (usec): min=2932, max=19084, avg=10339.37, stdev=2666.89 00:18:08.419 clat percentiles (usec): 00:18:08.419 | 1.00th=[ 5407], 5.00th=[ 6390], 10.00th=[ 7046], 20.00th=[ 7898], 00:18:08.419 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10159], 60.00th=[10814], 00:18:08.419 | 70.00th=[11600], 80.00th=[12649], 90.00th=[14222], 95.00th=[15008], 00:18:08.419 | 99.00th=[17171], 99.50th=[17957], 99.90th=[18744], 99.95th=[18744], 00:18:08.419 | 99.99th=[19006] 00:18:08.419 bw ( KiB/s): min=58634, max=62848, per=52.16%, avg=60794.50, stdev=1827.69, samples=4 00:18:08.419 iops : min= 3664, max= 3928, avg=3799.50, stdev=114.48, samples=4 00:18:08.419 write: IOPS=4304, BW=67.3MiB/s (70.5MB/s)(124MiB/1849msec); 0 zone resets 00:18:08.419 slat (usec): min=37, max=340, avg=42.74, stdev= 8.40 00:18:08.419 clat (usec): min=5920, max=22700, avg=12420.74, stdev=2370.83 00:18:08.419 lat (usec): min=5959, max=22752, avg=12463.48, stdev=2372.40 00:18:08.419 clat percentiles (usec): 00:18:08.419 | 1.00th=[ 7963], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10421], 00:18:08.419 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12125], 60.00th=[12780], 00:18:08.419 | 70.00th=[13435], 80.00th=[14353], 90.00th=[15664], 95.00th=[16712], 00:18:08.419 | 99.00th=[19006], 99.50th=[19792], 99.90th=[20841], 99.95th=[21365], 00:18:08.419 | 99.99th=[22676] 00:18:08.419 bw ( KiB/s): min=60327, max=66016, per=92.00%, avg=63361.75, stdev=2338.15, samples=4 00:18:08.419 iops : min= 3770, max= 4126, avg=3960.00, stdev=146.32, samples=4 00:18:08.419 lat (msec) : 4=0.07%, 10=36.43%, 20=63.37%, 50=0.13% 00:18:08.419 cpu : usr=70.20%, sys=19.00%, ctx=2, majf=0, minf=18 00:18:08.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:08.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:08.419 issued rwts: total=14649,7959,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.419 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:08.419 00:18:08.419 Run status group 0 (all jobs): 00:18:08.419 READ: bw=114MiB/s (119MB/s), 114MiB/s-114MiB/s (119MB/s-119MB/s), io=229MiB (240MB), run=2011-2011msec 00:18:08.419 WRITE: bw=67.3MiB/s (70.5MB/s), 67.3MiB/s-67.3MiB/s 
(70.5MB/s-70.5MB/s), io=124MiB (130MB), run=1849-1849msec 00:18:08.419 14:57:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:08.419 14:57:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:18:08.419 14:57:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:08.419 14:57:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:08.419 14:57:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:08.419 14:57:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:08.419 14:57:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:18:08.419 14:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:08.419 14:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:18:08.419 14:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:08.419 14:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:08.419 rmmod nvme_tcp 00:18:08.419 rmmod nvme_fabrics 00:18:08.419 rmmod nvme_keyring 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 87513 ']' 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 87513 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 87513 ']' 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 87513 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87513 00:18:08.678 killing process with pid 87513 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87513' 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 87513 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 87513 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:08.678 ************************************ 00:18:08.678 END TEST nvmf_fio_host 00:18:08.678 ************************************ 00:18:08.678 00:18:08.678 real 0m8.094s 00:18:08.678 user 0m33.374s 00:18:08.678 sys 0m2.221s 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:08.678 14:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.937 14:57:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:08.937 14:57:47 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:08.937 14:57:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:08.937 14:57:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:08.937 14:57:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:08.937 ************************************ 00:18:08.937 START TEST nvmf_failover 00:18:08.937 ************************************ 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:08.937 * Looking for test storage... 00:18:08.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.937 14:57:47 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:08.938 Cannot find device "nvmf_tgt_br" 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:08.938 Cannot find device "nvmf_tgt_br2" 00:18:08.938 14:57:47 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:08.938 Cannot find device "nvmf_tgt_br" 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:08.938 Cannot find device "nvmf_tgt_br2" 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:08.938 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:09.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:09.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:09.197 14:57:47 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:09.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:18:09.197 00:18:09.197 --- 10.0.0.2 ping statistics --- 00:18:09.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.197 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:09.197 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:09.197 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:18:09.197 00:18:09.197 --- 10.0.0.3 ping statistics --- 00:18:09.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.197 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:09.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:09.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:09.197 00:18:09.197 --- 10.0.0.1 ping statistics --- 00:18:09.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.197 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:18:09.197 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:09.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
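Summary of the nvmf_veth_init topology built above (a condensed sketch of the ip/iptables calls in this trace; it assumes a root shell with iproute2 and iptables available and reuses the interface names and 10.0.0.x/24 addresses shown in the log):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target, listener address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # target, second address
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # initiator -> target checks

The initiator address 10.0.0.1 stays in the default namespace while 10.0.0.2 and 10.0.0.3 (the NVMe/TCP listener addresses used below) live inside nvmf_tgt_ns_spdk, so all test traffic crosses the nvmf_br bridge, as the ping checks in the trace confirm.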
00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=87887 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 87887 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87887 ']' 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.198 14:57:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:09.456 [2024-07-12 14:57:47.895463] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:18:09.456 [2024-07-12 14:57:47.895588] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.456 [2024-07-12 14:57:48.034784] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:09.456 [2024-07-12 14:57:48.107945] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.456 [2024-07-12 14:57:48.108457] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.456 [2024-07-12 14:57:48.108758] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.456 [2024-07-12 14:57:48.109052] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.456 [2024-07-12 14:57:48.109268] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:09.456 [2024-07-12 14:57:48.109650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.456 [2024-07-12 14:57:48.109741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:09.714 [2024-07-12 14:57:48.109747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.649 14:57:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.649 14:57:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:10.649 14:57:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:10.649 14:57:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:10.649 14:57:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:10.649 14:57:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.649 14:57:48 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:10.649 [2024-07-12 14:57:49.255288] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.649 14:57:49 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:10.907 Malloc0 00:18:11.165 14:57:49 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:11.423 14:57:49 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:11.681 14:57:50 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.938 [2024-07-12 14:57:50.409112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.938 14:57:50 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:12.216 [2024-07-12 14:57:50.737461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:12.216 14:57:50 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:12.474 [2024-07-12 14:57:51.045749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:12.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
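Summary of the target-side configuration driven through rpc.py above (condensed from this trace; rpc.py stands in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and talks to the nvmf_tgt just started, over its default /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

This leaves the failover test with one subsystem (cnode1) backed by a 64 MiB, 512-byte-block malloc bdev, reachable on three TCP ports of the same 10.0.0.2 address.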
00:18:12.474 14:57:51 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=88004 00:18:12.474 14:57:51 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:12.474 14:57:51 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:12.474 14:57:51 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 88004 /var/tmp/bdevperf.sock 00:18:12.474 14:57:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88004 ']' 00:18:12.474 14:57:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.474 14:57:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:12.474 14:57:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.474 14:57:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:12.474 14:57:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:13.040 14:57:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.040 14:57:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:13.040 14:57:51 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:13.298 NVMe0n1 00:18:13.298 14:57:51 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:13.556 00:18:13.556 14:57:52 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88038 00:18:13.556 14:57:52 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:13.556 14:57:52 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:14.491 14:57:53 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:14.750 [2024-07-12 14:57:53.387234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ff0 is same with the state(5) to be set 00:18:14.750 [2024-07-12 14:57:53.387293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ff0 is same with the state(5) to be set 00:18:14.750 [2024-07-12 14:57:53.387304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ff0 is same with the state(5) to be set 00:18:14.750 [2024-07-12 14:57:53.387313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ff0 is same with the state(5) to be set 00:18:14.750 [2024-07-12 14:57:53.387321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ff0 is same with the state(5) to be set 00:18:14.750 [2024-07-12 14:57:53.387330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ff0 is same with the state(5) to 
be set 00:18:14.750 [... the same tcp.c:1607:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0xb71ff0 repeats with consecutive microsecond timestamps and is elided here ...] [2024-07-12 14:57:53.387722]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ff0 is same with the state(5) to be set 00:18:14.751 [2024-07-12 14:57:53.387730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ff0 is same with the state(5) to be set 00:18:14.751 [2024-07-12 14:57:53.387738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ff0 is same with the state(5) to be set 00:18:14.751 [2024-07-12 14:57:53.387746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ff0 is same with the state(5) to be set 00:18:14.751 [2024-07-12 14:57:53.387754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ff0 is same with the state(5) to be set 00:18:14.751 [2024-07-12 14:57:53.387762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ff0 is same with the state(5) to be set 00:18:14.751 [2024-07-12 14:57:53.387771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ff0 is same with the state(5) to be set 00:18:15.008 14:57:53 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:18.294 14:57:56 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:18.294 00:18:18.294 14:57:56 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:18.554 [2024-07-12 14:57:57.026476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.554 [2024-07-12 14:57:57.026542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.554 [2024-07-12 14:57:57.026553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.554 [2024-07-12 14:57:57.026562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.554 [2024-07-12 14:57:57.026570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.554 [2024-07-12 14:57:57.026579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.554 [2024-07-12 14:57:57.026587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.554 [2024-07-12 14:57:57.026596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.554 [2024-07-12 14:57:57.026604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.554 [2024-07-12 14:57:57.026612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.554 [2024-07-12 14:57:57.026620] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.554 [2024-07-12 14:57:57.026629] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.554 [... the same tcp.c:1607:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0xb72f10 repeats with consecutive microsecond timestamps and is elided here ...] [2024-07-12 14:57:57.027545]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.555 [2024-07-12 14:57:57.027553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.555 [2024-07-12 14:57:57.027561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.555 [2024-07-12 14:57:57.027570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.555 [2024-07-12 14:57:57.027578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.555 [2024-07-12 14:57:57.027586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb72f10 is same with the state(5) to be set 00:18:18.555 14:57:57 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:21.841 14:58:00 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.841 [2024-07-12 14:58:00.341099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.841 14:58:00 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:22.784 14:58:01 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:23.043 [2024-07-12 14:58:01.638674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.043 [2024-07-12 14:58:01.638727] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.043 [2024-07-12 14:58:01.638738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.043 [2024-07-12 14:58:01.638747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.043 [2024-07-12 14:58:01.638756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.043 [2024-07-12 14:58:01.638764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.043 [2024-07-12 14:58:01.638772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.043 [2024-07-12 14:58:01.638780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.043 [2024-07-12 14:58:01.638788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.043 [2024-07-12 14:58:01.638796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.043 [2024-07-12 14:58:01.638804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.043 [2024-07-12 14:58:01.638813] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 
is same with the state(5) to be set 00:18:23.043 [... the same tcp.c:1607:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0xb73ea0 repeats with consecutive microsecond timestamps and is elided here ...] [2024-07-12 14:58:01.639166]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.044 [2024-07-12 14:58:01.639174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.044 [2024-07-12 14:58:01.639182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.044 [2024-07-12 14:58:01.639190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.044 [2024-07-12 14:58:01.639197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.044 [2024-07-12 14:58:01.639205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.044 [2024-07-12 14:58:01.639213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.044 [2024-07-12 14:58:01.639222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb73ea0 is same with the state(5) to be set 00:18:23.044 14:58:01 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 88038 00:18:29.612 0 00:18:29.612 14:58:07 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 88004 00:18:29.612 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88004 ']' 00:18:29.612 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88004 00:18:29.612 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:29.612 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:29.612 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88004 00:18:29.612 killing process with pid 88004 00:18:29.612 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:29.612 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:29.612 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88004' 00:18:29.612 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88004 00:18:29.612 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88004 00:18:29.612 14:58:07 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:29.612 [2024-07-12 14:57:51.125955] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:18:29.612 [2024-07-12 14:57:51.126090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88004 ] 00:18:29.612 [2024-07-12 14:57:51.260852] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.612 [2024-07-12 14:57:51.322836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.612 Running I/O for 15 seconds... 
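[Editor's note] For readability, the RPC sequence that host/failover.sh drives in the lines above amounts to the shell sketch below. The rpc.py path, subsystem NQN, addresses and ports are copied verbatim from the log; the RPC/NQN variables and the stripped-down form are editorial shorthand, not the actual test script.

  # Sketch only: the listener toggle as seen in the log, not the full failover.sh logic.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Re-add a TCP listener for the subsystem on 10.0.0.2:4420 ...
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  # ... then remove the listener on 10.0.0.2:4422, so the initiator can no longer use that portal.
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422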
00:18:29.612 [2024-07-12 14:57:53.388314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388688] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.388981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.388996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.389011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.389027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.389041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.389057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.389071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.389087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.389101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.389117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.389139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.389155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.389170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.389186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.389200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.389216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.389230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.389246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.389261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.389277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.389291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.389307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.389323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.612 [2024-07-12 14:57:53.389341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:113 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.612 [2024-07-12 14:57:53.389355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81736 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.389968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.389984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:29.613 [2024-07-12 14:57:53.389998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.390029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.390059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.390089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.390119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.390151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.390183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.390213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.390244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.390274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.390304] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.390344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.390376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.390407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.390438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.613 [2024-07-12 14:57:53.390468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.613 [2024-07-12 14:57:53.390484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.614 [2024-07-12 14:57:53.390498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.614 [2024-07-12 14:57:53.390541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.614 [2024-07-12 14:57:53.390572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.614 [2024-07-12 14:57:53.390602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.614 [2024-07-12 14:57:53.390633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.390663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.390694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.390724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.390762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.390792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.390822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.390854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.390886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.390916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.390947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.390978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.390994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 
14:57:53.391622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.614 [2024-07-12 14:57:53.391818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.614 [2024-07-12 14:57:53.391835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.615 [2024-07-12 14:57:53.391849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.391865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.615 [2024-07-12 14:57:53.391882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.391898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.615 [2024-07-12 14:57:53.391913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.391929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.615 [2024-07-12 14:57:53.391943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.391966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:53.391981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.391997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:53.392011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:53.392041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:53.392074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:53.392105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:53.392135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:53.392165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:53.392196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:53.392239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:53.392271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:53.392302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:53.392333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:53.392370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:53.392404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:53.392435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b3c40 is same with the state(5) to be set 00:18:29.615 [2024-07-12 14:57:53.392470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.615 [2024-07-12 14:57:53.392481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.615 [2024-07-12 14:57:53.392492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82104 len:8 PRP1 0x0 PRP2 0x0 00:18:29.615 [2024-07-12 14:57:53.392505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392574] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20b3c40 was disconnected and freed. reset controller. 
00:18:29.615 [2024-07-12 14:57:53.392601] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:29.615 [2024-07-12 14:57:53.392667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.615 [2024-07-12 14:57:53.392688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.615 [2024-07-12 14:57:53.392722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.615 [2024-07-12 14:57:53.392751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.615 [2024-07-12 14:57:53.392780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:53.392794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:29.615 [2024-07-12 14:57:53.392836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2044fd0 (9): Bad file descriptor 00:18:29.615 [2024-07-12 14:57:53.396866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:29.615 [2024-07-12 14:57:53.436803] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
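[Editor's note] The *NOTICE* lines just above are the part of this dump that matters for the failover test: the qpair on 10.0.0.2:4420 is disconnected and freed, bdev_nvme starts failover to 10.0.0.2:4421, and the controller reset completes. If you only want those milestones out of the bdevperf output (the try.txt file catted earlier), a grep along these lines is enough; the pattern strings are taken from the log itself, and this helper is a suggestion, not part of the test suite.

  # Hypothetical helper: pull the failover milestones out of the bdevperf log.
  LOG=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  grep -E 'Start failover|in failed state|resetting controller|Resetting controller successful' "$LOG"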
00:18:29.615 [2024-07-12 14:57:57.028342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:57.028390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:57.028416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:57.028462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:57.028480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:57.028495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:57.028512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:57.028540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:57.028558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:57.028573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:57.028589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:57.028603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:57.028619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:57.028633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:57.028650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:57.028664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:57.028680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:57.028694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:57.028710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:57.028725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:57.028741] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:57.028755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:57.028771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:57.028785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:57.028801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:57.028815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:57.028832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:57.028846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.615 [2024-07-12 14:57:57.028870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.615 [2024-07-12 14:57:57.028885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.028901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.028915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.028931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.028947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.028964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.028978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.028994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029367] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.616 [2024-07-12 14:57:57.029691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.616 [2024-07-12 14:57:57.029708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 
lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.617 [2024-07-12 14:57:57.029722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.029739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.617 [2024-07-12 14:57:57.029754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.029770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.617 [2024-07-12 14:57:57.029784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.029800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.617 [2024-07-12 14:57:57.029815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.029831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.617 [2024-07-12 14:57:57.029845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.029861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.617 [2024-07-12 14:57:57.029876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.029892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.617 [2024-07-12 14:57:57.029906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.029922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.617 [2024-07-12 14:57:57.029937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.029953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.617 [2024-07-12 14:57:57.029969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.029986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.617 [2024-07-12 14:57:57.030000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:29.617 [2024-07-12 14:57:57.030030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.617 [2024-07-12 14:57:57.030061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.617 [2024-07-12 14:57:57.030103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.617 [2024-07-12 14:57:57.030133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 
14:57:57.030347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.617 [2024-07-12 14:57:57.030701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.617 [2024-07-12 14:57:57.030717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.030731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.030747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.030761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.030777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.030791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.030807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.030822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.030837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.030851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.030873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.030889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.030905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.030919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.030935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.030948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.030964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.030980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.030996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031621] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031931] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.031977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.031993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.032007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.032023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.618 [2024-07-12 14:57:57.032037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.618 [2024-07-12 14:57:57.032059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:57:57.032074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.032090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:57:57.032104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.032120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:57:57.032135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.032151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:57:57.032165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.032205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.619 [2024-07-12 14:57:57.032231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83040 len:8 PRP1 0x0 PRP2 0x0 00:18:29.619 [2024-07-12 14:57:57.032247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.032265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.619 [2024-07-12 14:57:57.032276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.619 [2024-07-12 14:57:57.032287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:83048 len:8 PRP1 0x0 PRP2 0x0 00:18:29.619 [2024-07-12 14:57:57.032301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.032314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.619 [2024-07-12 14:57:57.032325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.619 [2024-07-12 14:57:57.032336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82464 len:8 PRP1 0x0 PRP2 0x0 00:18:29.619 [2024-07-12 14:57:57.032349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.032363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.619 [2024-07-12 14:57:57.032373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.619 [2024-07-12 14:57:57.032384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82472 len:8 PRP1 0x0 PRP2 0x0 00:18:29.619 [2024-07-12 14:57:57.032397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.032411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.619 [2024-07-12 14:57:57.032421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.619 [2024-07-12 14:57:57.032431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82480 len:8 PRP1 0x0 PRP2 0x0 00:18:29.619 [2024-07-12 14:57:57.032445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.032462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.619 [2024-07-12 14:57:57.032473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.619 [2024-07-12 14:57:57.032491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82488 len:8 PRP1 0x0 PRP2 0x0 00:18:29.619 [2024-07-12 14:57:57.032506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.032531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.619 [2024-07-12 14:57:57.032543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.619 [2024-07-12 14:57:57.032554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82496 len:8 PRP1 0x0 PRP2 0x0 00:18:29.619 [2024-07-12 14:57:57.032567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.032581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.619 [2024-07-12 14:57:57.032591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.619 [2024-07-12 14:57:57.032601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82504 len:8 PRP1 0x0 PRP2 0x0 00:18:29.619 [2024-07-12 
14:57:57.032615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.032629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.619 [2024-07-12 14:57:57.032639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.619 [2024-07-12 14:57:57.032650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82512 len:8 PRP1 0x0 PRP2 0x0 00:18:29.619 [2024-07-12 14:57:57.032663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.032717] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2260340 was disconnected and freed. reset controller. 00:18:29.619 [2024-07-12 14:57:57.032736] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:18:29.619 [2024-07-12 14:57:57.032793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.619 [2024-07-12 14:57:57.032815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.032831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.619 [2024-07-12 14:57:57.032846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.044132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.619 [2024-07-12 14:57:57.044169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.044186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.619 [2024-07-12 14:57:57.044200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:57:57.044215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:29.619 [2024-07-12 14:57:57.044271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2044fd0 (9): Bad file descriptor 00:18:29.619 [2024-07-12 14:57:57.048247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:29.619 [2024-07-12 14:57:57.086182] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
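The second hop above (10.0.0.2:4421 to 10.0.0.2:4422) follows the same pattern as the first. In SPDK failover tests a path switch of this kind is typically forced from the target side by dropping the listener the initiator is currently connected to, which is consistent with the "Failed to flush tqpair ... Bad file descriptor" errors and the burst of aborted commands seen here before the reconnect; whether this particular job uses exactly that mechanism is not shown in the log, so the trigger below (ports assumed) is illustrative only:

    # drop the currently active listener; bdev_nvme detects the broken
    # connection and fails over to the next registered trid
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ensure the next path is (or has already been made) available
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 -f ipv4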
00:18:29.619 [2024-07-12 14:58:01.641361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:58:01.641483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:58:01.641527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:58:01.641547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:58:01.641564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:58:01.641579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:58:01.641596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:58:01.641610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:58:01.641626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:58:01.641641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:58:01.641661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:58:01.641676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:58:01.641692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:58:01.641706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:58:01.641722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:58:01.641736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:58:01.641752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:58:01.641767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:58:01.641783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:58:01.641798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:58:01.641813] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:58:01.641828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:58:01.641844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:58:01.641859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:58:01.641875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:58:01.641889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:58:01.641916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:58:01.641931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.619 [2024-07-12 14:58:01.641947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.619 [2024-07-12 14:58:01.641962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.641977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.641992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642130] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23248 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.620 [2024-07-12 14:58:01.642561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.620 [2024-07-12 14:58:01.642592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.620 [2024-07-12 14:58:01.642622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.620 [2024-07-12 14:58:01.642653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.620 [2024-07-12 14:58:01.642683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.620 [2024-07-12 14:58:01.642723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.620 [2024-07-12 14:58:01.642755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:29.620 [2024-07-12 14:58:01.642786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.642969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.642985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.643001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.643017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.643031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.643047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.643061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.643078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.643092] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.643118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.643134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.643149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.643164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.620 [2024-07-12 14:58:01.643180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.620 [2024-07-12 14:58:01.643194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.621 [2024-07-12 14:58:01.643316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.621 [2024-07-12 14:58:01.643346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.621 [2024-07-12 14:58:01.643376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.621 [2024-07-12 14:58:01.643407] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.621 [2024-07-12 14:58:01.643438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.621 [2024-07-12 14:58:01.643468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.621 [2024-07-12 14:58:01.643505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.643975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.643991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.644005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.644021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.644036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.644052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.644067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 
[2024-07-12 14:58:01.644083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.644097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.644113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.644127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.644144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.644159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.644175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.644190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.644205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.644220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.644248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.644263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.644280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.644294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.644310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.644324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.621 [2024-07-12 14:58:01.644347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.621 [2024-07-12 14:58:01.644362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.644378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.622 [2024-07-12 14:58:01.644392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.644408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.622 [2024-07-12 14:58:01.644430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.644464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.644480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23624 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.644494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.644512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.644537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.644549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23632 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.644562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.644576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.644587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.644597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23640 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.644611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.644625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.644634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.644645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.644659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.644673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.644683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.644693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23656 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.644707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.644721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.644731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.644742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23664 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.644756] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.644780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.644791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.644802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23672 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.644816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.644830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.644840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.644850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.644865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.644879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.644889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.644900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23688 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.644913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.644927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.644937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.644947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23696 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.644961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.644974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.644984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.644995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23704 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.645008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.645022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.645032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.645043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.645057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.645071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.645081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.645091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23720 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.645105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.645119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.645129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.645139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23728 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.645159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.645173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.645183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.645194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23736 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.645208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.645221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.645231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.645242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.645255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.645269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.645279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.645289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23752 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.645303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.645317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.645327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.645337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23760 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.645351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.645365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.645375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.645385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23768 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.645401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.645416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.645426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.645436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.645450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.645464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.645474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.645485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23784 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.645501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.645527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.645540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.645557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23792 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.645571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.645585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.645595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.645606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23800 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.645620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.622 [2024-07-12 14:58:01.645633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.622 [2024-07-12 14:58:01.645643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.622 [2024-07-12 14:58:01.645654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:8 PRP1 0x0 PRP2 0x0 00:18:29.622 [2024-07-12 14:58:01.645667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 
14:58:01.645681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.623 [2024-07-12 14:58:01.645691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.623 [2024-07-12 14:58:01.645702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23816 len:8 PRP1 0x0 PRP2 0x0 00:18:29.623 [2024-07-12 14:58:01.645715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.645729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.623 [2024-07-12 14:58:01.645739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.623 [2024-07-12 14:58:01.645750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23824 len:8 PRP1 0x0 PRP2 0x0 00:18:29.623 [2024-07-12 14:58:01.645763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.645777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.623 [2024-07-12 14:58:01.645787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.623 [2024-07-12 14:58:01.645798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23832 len:8 PRP1 0x0 PRP2 0x0 00:18:29.623 [2024-07-12 14:58:01.645813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.645827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.623 [2024-07-12 14:58:01.645837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.623 [2024-07-12 14:58:01.645848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:8 PRP1 0x0 PRP2 0x0 00:18:29.623 [2024-07-12 14:58:01.645861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.645875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.623 [2024-07-12 14:58:01.645884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.623 [2024-07-12 14:58:01.645895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23848 len:8 PRP1 0x0 PRP2 0x0 00:18:29.623 [2024-07-12 14:58:01.645910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.645924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.623 [2024-07-12 14:58:01.645941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.623 [2024-07-12 14:58:01.645952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23856 len:8 PRP1 0x0 PRP2 0x0 00:18:29.623 [2024-07-12 14:58:01.645966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.645980] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.623 [2024-07-12 14:58:01.645990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.623 [2024-07-12 14:58:01.646000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23864 len:8 PRP1 0x0 PRP2 0x0 00:18:29.623 [2024-07-12 14:58:01.646014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.646027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.623 [2024-07-12 14:58:01.646037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.623 [2024-07-12 14:58:01.646048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:8 PRP1 0x0 PRP2 0x0 00:18:29.623 [2024-07-12 14:58:01.646062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.646075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.623 [2024-07-12 14:58:01.646085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.623 [2024-07-12 14:58:01.646096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23880 len:8 PRP1 0x0 PRP2 0x0 00:18:29.623 [2024-07-12 14:58:01.646110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.646123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.623 [2024-07-12 14:58:01.646133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.623 [2024-07-12 14:58:01.646144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23888 len:8 PRP1 0x0 PRP2 0x0 00:18:29.623 [2024-07-12 14:58:01.646157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.646171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.623 [2024-07-12 14:58:01.646181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.623 [2024-07-12 14:58:01.646191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23896 len:8 PRP1 0x0 PRP2 0x0 00:18:29.623 [2024-07-12 14:58:01.646206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.646220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.623 [2024-07-12 14:58:01.646230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.623 [2024-07-12 14:58:01.646241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:8 PRP1 0x0 PRP2 0x0 00:18:29.623 [2024-07-12 14:58:01.646254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.646303] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x2260000 was disconnected and freed. reset controller. 00:18:29.623 [2024-07-12 14:58:01.646321] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:18:29.623 [2024-07-12 14:58:01.646380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.623 [2024-07-12 14:58:01.646410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.646427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.623 [2024-07-12 14:58:01.646441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.646456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.623 [2024-07-12 14:58:01.646471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.646485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.623 [2024-07-12 14:58:01.646499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.623 [2024-07-12 14:58:01.646525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:29.623 [2024-07-12 14:58:01.646579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2044fd0 (9): Bad file descriptor 00:18:29.623 [2024-07-12 14:58:01.650498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:29.623 [2024-07-12 14:58:01.686155] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:29.623 00:18:29.623 Latency(us) 00:18:29.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.623 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:29.623 Verification LBA range: start 0x0 length 0x4000 00:18:29.623 NVMe0n1 : 15.00 8731.67 34.11 224.80 0.00 14258.38 599.51 34078.72 00:18:29.623 =================================================================================================================== 00:18:29.623 Total : 8731.67 34.11 224.80 0.00 14258.38 599.51 34078.72 00:18:29.623 Received shutdown signal, test time was about 15.000000 seconds 00:18:29.623 00:18:29.623 Latency(us) 00:18:29.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.623 =================================================================================================================== 00:18:29.623 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:29.623 14:58:07 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:29.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:29.623 14:58:07 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:18:29.623 14:58:07 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:18:29.623 14:58:07 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88241 00:18:29.623 14:58:07 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88241 /var/tmp/bdevperf.sock 00:18:29.623 14:58:07 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:18:29.623 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88241 ']' 00:18:29.623 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.623 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.623 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.623 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.623 14:58:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:29.881 14:58:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.881 14:58:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:29.881 14:58:08 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:30.138 [2024-07-12 14:58:08.728887] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:30.138 14:58:08 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:30.703 [2024-07-12 14:58:09.089241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:30.703 14:58:09 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:30.961 NVMe0n1 00:18:30.961 14:58:09 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:31.219 00:18:31.219 14:58:09 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:31.477 00:18:31.477 14:58:10 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:31.477 14:58:10 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:31.734 14:58:10 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:31.992 14:58:10 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:35.269 14:58:13 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
grep -q NVMe0 00:18:35.269 14:58:13 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:35.269 14:58:13 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88378 00:18:35.269 14:58:13 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:35.269 14:58:13 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 88378 00:18:36.641 0 00:18:36.641 14:58:15 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:36.641 [2024-07-12 14:58:07.527668] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:18:36.641 [2024-07-12 14:58:07.527911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88241 ] 00:18:36.641 [2024-07-12 14:58:07.676940] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.641 [2024-07-12 14:58:07.747449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.641 [2024-07-12 14:58:10.541150] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:36.641 [2024-07-12 14:58:10.541275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.641 [2024-07-12 14:58:10.541299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.641 [2024-07-12 14:58:10.541317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.641 [2024-07-12 14:58:10.541331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.641 [2024-07-12 14:58:10.541345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.641 [2024-07-12 14:58:10.541358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.641 [2024-07-12 14:58:10.541373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.641 [2024-07-12 14:58:10.541386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.641 [2024-07-12 14:58:10.541400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:36.641 [2024-07-12 14:58:10.541441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:36.641 [2024-07-12 14:58:10.541471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e5fd0 (9): Bad file descriptor 00:18:36.641 [2024-07-12 14:58:10.550239] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:36.641 Running I/O for 1 seconds... 
00:18:36.641 00:18:36.641 Latency(us) 00:18:36.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.642 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:36.642 Verification LBA range: start 0x0 length 0x4000 00:18:36.642 NVMe0n1 : 1.01 8836.63 34.52 0.00 0.00 14388.84 2323.55 15490.33 00:18:36.642 =================================================================================================================== 00:18:36.642 Total : 8836.63 34.52 0.00 0.00 14388.84 2323.55 15490.33 00:18:36.642 14:58:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:36.642 14:58:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:36.899 14:58:15 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:37.156 14:58:15 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:37.156 14:58:15 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:37.413 14:58:15 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:37.671 14:58:16 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:40.957 14:58:19 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:40.957 14:58:19 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:40.957 14:58:19 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 88241 00:18:40.957 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88241 ']' 00:18:40.957 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88241 00:18:40.957 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:40.957 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.957 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88241 00:18:40.957 killing process with pid 88241 00:18:40.957 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:40.957 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:40.957 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88241' 00:18:40.957 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88241 00:18:40.957 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88241 00:18:40.957 14:58:19 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:41.215 14:58:19 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:41.473 14:58:19 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:41.473 rmmod nvme_tcp 00:18:41.473 rmmod nvme_fabrics 00:18:41.473 rmmod nvme_keyring 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 87887 ']' 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 87887 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87887 ']' 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87887 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87887 00:18:41.473 killing process with pid 87887 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87887' 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87887 00:18:41.473 14:58:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87887 00:18:41.732 14:58:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:41.732 14:58:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:41.732 14:58:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:41.732 14:58:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:41.732 14:58:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:41.732 14:58:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.732 14:58:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.732 14:58:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.732 14:58:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:41.732 00:18:41.732 real 0m32.809s 00:18:41.732 user 2m8.398s 00:18:41.732 sys 0m4.443s 00:18:41.732 14:58:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:41.732 14:58:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:41.732 ************************************ 00:18:41.732 END TEST nvmf_failover 00:18:41.732 ************************************ 00:18:41.732 14:58:20 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:18:41.732 14:58:20 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:41.732 14:58:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:41.732 14:58:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:41.732 14:58:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:41.732 ************************************ 00:18:41.732 START TEST nvmf_host_discovery 00:18:41.732 ************************************ 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:41.732 * Looking for test storage... 00:18:41.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.732 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:41.733 Cannot find device "nvmf_tgt_br" 00:18:41.733 
14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:41.733 Cannot find device "nvmf_tgt_br2" 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:41.733 Cannot find device "nvmf_tgt_br" 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:18:41.733 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:41.991 Cannot find device "nvmf_tgt_br2" 00:18:41.991 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:18:41.991 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:41.991 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:41.991 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:41.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.991 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:41.991 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:41.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.991 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:41.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:18:41.992 00:18:41.992 --- 10.0.0.2 ping statistics --- 00:18:41.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.992 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:41.992 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:41.992 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:18:41.992 00:18:41.992 --- 10.0.0.3 ping statistics --- 00:18:41.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.992 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:41.992 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:41.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
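nvmf_veth_init has now built the virtual test network: one namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends, a bridge (nvmf_br) joining the host-side peers, 10.0.0.0/24 addressing, and an iptables rule opening the first data port. A condensed sketch of the same topology, using the commands issued in the trace (the loop and sh -c grouping are only a shorthand here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target ends move into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for p in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$p" up; ip link set "$p" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the first data port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let traffic hairpin across the bridge
    ping -c 1 10.0.0.2                                                   # reachability check, as in the trace
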
00:18:41.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:18:41.992 00:18:41.992 --- 10.0.0.1 ping statistics --- 00:18:41.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.992 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:18:42.250 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.250 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:18:42.250 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:42.250 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:42.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=88694 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 88694 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88694 ']' 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.251 14:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:42.251 [2024-07-12 14:58:20.749923] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:18:42.251 [2024-07-12 14:58:20.750211] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.251 [2024-07-12 14:58:20.886810] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.509 [2024-07-12 14:58:20.946385] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:42.509 [2024-07-12 14:58:20.946644] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.509 [2024-07-12 14:58:20.946835] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.509 [2024-07-12 14:58:20.946892] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.509 [2024-07-12 14:58:20.946923] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:42.509 [2024-07-12 14:58:20.947051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:42.509 [2024-07-12 14:58:21.080164] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:42.509 [2024-07-12 14:58:21.088281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:42.509 null0 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:42.509 null1 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
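With the namespace in place, nvmfappstart launches the target inside it and waitforlisten blocks until the RPC socket answers; the TCP transport, the discovery listener on port 8009 and two null bdevs are then created over RPC. A sketch of that sequence: rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py (an assumption about autotest_common.sh), the polling loop is only a stand-in for the real waitforlisten helper, and all RPC names and flags are copied from the trace:

    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poor man's waitforlisten: poll the default RPC socket /var/tmp/spdk.sock until it responds
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192       # flags as logged above
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009                                       # discovery service on the well-known port
    "$SPDK/scripts/rpc.py" bdev_null_create null0 1000 512               # two null bdevs to back the namespaces
    "$SPDK/scripts/rpc.py" bdev_null_create null1 1000 512
    "$SPDK/scripts/rpc.py" bdev_wait_for_examine
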
xtrace_disable 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:42.509 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:42.509 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.510 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88725 00:18:42.510 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88725 /tmp/host.sock 00:18:42.510 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:42.510 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88725 ']' 00:18:42.510 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:42.510 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.510 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:42.510 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.510 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:42.769 [2024-07-12 14:58:21.179472] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:18:42.769 [2024-07-12 14:58:21.179785] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88725 ] 00:18:42.769 [2024-07-12 14:58:21.319341] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.769 [2024-07-12 14:58:21.379918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:43.028 14:58:21 
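The "host" in this test is a second SPDK app: another nvmf_tgt pinned to core 0 with its own RPC socket at /tmp/host.sock, so the initiator-side bdev_nvme discovery machinery can be driven over RPC independently of the target. A sketch of the host-side start and the discovery command, flags as logged (-b is the controller name prefix, -q the host NQN used when connecting):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &
    hostpid=$!
    until "$SPDK/scripts/rpc.py" -s /tmp/host.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    "$SPDK/scripts/rpc.py" -s /tmp/host.sock log_set_flag bdev_nvme      # the verbose discovery logging seen below
    "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
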
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:43.028 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.287 [2024-07-12 14:58:21.880505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- 
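Every check in this suite flattens RPC output into a space-separated string through two small helpers and compares it with a plain [[ ... == ... ]], which is why an empty state shows up as '' on both sides above. Reconstructed from the pipelines in the trace; rpc_cmd is the suite's RPC wrapper (assumption as before), and the function names are the ones host/discovery.sh uses:

    get_subsystem_names() {    # names of attached NVMe controllers, e.g. "nvme0"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {          # bdevs created from attached namespaces, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
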
common/autotest_common.sh@10 -- # set +x 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:43.287 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.546 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:43.546 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:43.546 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:43.546 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:43.546 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.546 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:43.546 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.546 14:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:43.546 14:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:18:43.546 14:58:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:18:44.112 [2024-07-12 14:58:22.483192] bdev_nvme.c:6991:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:44.112 [2024-07-12 14:58:22.483232] bdev_nvme.c:7071:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:44.112 [2024-07-12 14:58:22.483254] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:44.112 [2024-07-12 14:58:22.569333] bdev_nvme.c:6920:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:44.113 [2024-07-12 14:58:22.626947] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:44.113 [2024-07-12 14:58:22.627017] 
bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
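waitforcondition is the retry loop behind these checks: the == nvme0 test needed a second attempt (the sleep 1 in the trace) before the discovery attach landed, and the same loop wraps the notification counter used by the is_notification_count_eq steps. A sketch consistent with the autotest_common.sh line numbers in the trace; the bodies are reconstructions, not copies of the source:

    waitforcondition() {
        local cond=$1 max=10
        while ((max--)); do
            eval "$cond" && return 0   # condition strings like '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
            sleep 1
        done
        return 1
    }

    get_notification_count() {         # events reported since the last check; notify_id advances accordingly
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
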
00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:44.681 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:44.682 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:44.682 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:44.682 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:44.682 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:44.682 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.682 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.682 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:44.941 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.942 [2024-07-12 14:58:23.465126] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:44.942 [2024-07-12 14:58:23.466157] bdev_nvme.c:6973:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:44.942 [2024-07-12 14:58:23.466200] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- 
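Target-side provisioning here is deliberately incremental: the subsystem, its first namespace and its 4420 listener produce nothing on the host until nvmf_subsystem_add_host grants the test host NQN access (which is when the trace first shows nvme0/nvme0n1 attach), after which the second namespace and the 4421 listener are hot-added and reach the host through AER-triggered discovery log page reads. A condensed sketch of the target-socket RPC sequence, commands as logged:

    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test   # host attaches: nvme0, nvme0n1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1                        # hot add -> nvme0n2
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421   # second path via AER
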
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.942 [2024-07-12 14:58:23.552222] bdev_nvme.c:6915:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.942 14:58:23 nvmf_tcp.nvmf_host_discovery 
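The path check flattens every connected trsvcid for one controller into a single sorted string, so the single-path state "4420" and the multipath state "4420 4421" can both be asserted with a plain string comparison. Reconstructed from the pipeline in the trace, with the same rpc_cmd caveat as above:

    get_subsystem_paths() {    # e.g. "4420 4421" once the second listener has been discovered
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
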
-- host/discovery.sh@63 -- # sort -n 00:18:45.201 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.201 [2024-07-12 14:58:23.616716] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:45.201 [2024-07-12 14:58:23.616753] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:45.201 [2024-07-12 14:58:23.616761] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:45.201 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:45.201 14:58:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:18:46.137 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.138 [2024-07-12 14:58:24.730567] bdev_nvme.c:6973:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:46.138 [2024-07-12 14:58:24.730613] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:46.138 [2024-07-12 14:58:24.734098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.138 [2024-07-12 14:58:24.734141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.138 [2024-07-12 14:58:24.734156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.138 [2024-07-12 14:58:24.734165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.138 [2024-07-12 14:58:24.734176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.138 [2024-07-12 14:58:24.734185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.138 [2024-07-12 14:58:24.734195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.138 [2024-07-12 14:58:24.734204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.138 [2024-07-12 14:58:24.734214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8df0 is same with the state(5) to be set 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 
max=10 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:46.138 [2024-07-12 14:58:24.744049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c8df0 (9): Bad file descriptor 00:18:46.138 [2024-07-12 14:58:24.754072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:46.138 [2024-07-12 14:58:24.754210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.138 [2024-07-12 14:58:24.754235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c8df0 with addr=10.0.0.2, port=4420 00:18:46.138 [2024-07-12 14:58:24.754247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8df0 is same with the state(5) to be set 00:18:46.138 [2024-07-12 14:58:24.754265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c8df0 (9): Bad file descriptor 00:18:46.138 [2024-07-12 14:58:24.754280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:46.138 [2024-07-12 14:58:24.754289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:46.138 [2024-07-12 14:58:24.754300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:46.138 [2024-07-12 14:58:24.754316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
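The repeated "connect() failed, errno = 111" / "Resetting controller failed." records here and in the next few chunks appear to be expected: the test has just removed the 4420 listener from nqn.2016-06.io.spdk:cnode0 (host/discovery.sh@127 above), so every reconnect attempt to 10.0.0.2:4420 gets ECONNREFUSED (errno 111 on Linux) until the next discovery log page prunes that path, which shows up a little further down as "4420 not found" / "4421 found again". A rough way to watch the surviving paths, reusing the same RPC and jq filter the test itself runs (sketch only, assuming the host RPC socket is still /tmp/host.sock and rpc.py is on PATH as scripts/rpc.py):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs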
00:18:46.138 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.138 [2024-07-12 14:58:24.764155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:46.138 [2024-07-12 14:58:24.764338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.138 [2024-07-12 14:58:24.764382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c8df0 with addr=10.0.0.2, port=4420 00:18:46.138 [2024-07-12 14:58:24.764401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8df0 is same with the state(5) to be set 00:18:46.138 [2024-07-12 14:58:24.764431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c8df0 (9): Bad file descriptor 00:18:46.138 [2024-07-12 14:58:24.764456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:46.138 [2024-07-12 14:58:24.764470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:46.138 [2024-07-12 14:58:24.764485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:46.138 [2024-07-12 14:58:24.764509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:46.138 [2024-07-12 14:58:24.774256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:46.138 [2024-07-12 14:58:24.774377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.138 [2024-07-12 14:58:24.774402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c8df0 with addr=10.0.0.2, port=4420 00:18:46.138 [2024-07-12 14:58:24.774414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8df0 is same with the state(5) to be set 00:18:46.138 [2024-07-12 14:58:24.774432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c8df0 (9): Bad file descriptor 00:18:46.138 [2024-07-12 14:58:24.774447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:46.138 [2024-07-12 14:58:24.774456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:46.138 [2024-07-12 14:58:24.774466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:46.138 [2024-07-12 14:58:24.774482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:46.138 [2024-07-12 14:58:24.784331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:46.138 [2024-07-12 14:58:24.784439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.138 [2024-07-12 14:58:24.784462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c8df0 with addr=10.0.0.2, port=4420 00:18:46.138 [2024-07-12 14:58:24.784473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8df0 is same with the state(5) to be set 00:18:46.138 [2024-07-12 14:58:24.784490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c8df0 (9): Bad file descriptor 00:18:46.138 [2024-07-12 14:58:24.784505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:46.138 [2024-07-12 14:58:24.784526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:46.138 [2024-07-12 14:58:24.784539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:46.138 [2024-07-12 14:58:24.784555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:46.398 [2024-07-12 14:58:24.794404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:46.398 [2024-07-12 14:58:24.794541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.398 [2024-07-12 14:58:24.794567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c8df0 with addr=10.0.0.2, port=4420 00:18:46.398 [2024-07-12 14:58:24.794580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8df0 is same with the state(5) to be set 00:18:46.398 [2024-07-12 14:58:24.794600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c8df0 (9): Bad file descriptor 00:18:46.398 [2024-07-12 14:58:24.794626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:46.398 [2024-07-12 14:58:24.794636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:46.398 [2024-07-12 14:58:24.794646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.398 [2024-07-12 14:58:24.794663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:46.398 [2024-07-12 14:58:24.805052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:46.398 [2024-07-12 14:58:24.805156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.398 [2024-07-12 14:58:24.805181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c8df0 with addr=10.0.0.2, port=4420 00:18:46.398 [2024-07-12 14:58:24.805193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8df0 is same with the state(5) to be set 00:18:46.398 [2024-07-12 14:58:24.805209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c8df0 (9): Bad file descriptor 00:18:46.398 [2024-07-12 14:58:24.805224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:46.398 [2024-07-12 14:58:24.805233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:46.398 [2024-07-12 14:58:24.805244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:46.398 [2024-07-12 14:58:24.805260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:46.398 [2024-07-12 14:58:24.815112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:46.398 [2024-07-12 14:58:24.815206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.398 [2024-07-12 14:58:24.815228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c8df0 with addr=10.0.0.2, port=4420 00:18:46.398 [2024-07-12 14:58:24.815239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8df0 is same with the state(5) to be set 00:18:46.398 [2024-07-12 14:58:24.815255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c8df0 (9): Bad file descriptor 00:18:46.398 [2024-07-12 14:58:24.815270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:46.398 [2024-07-12 14:58:24.815279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:46.398 [2024-07-12 14:58:24.815288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:46.398 [2024-07-12 14:58:24.815303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:46.398 [2024-07-12 14:58:24.818079] bdev_nvme.c:6778:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:46.398 [2024-07-12 14:58:24.818112] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@59 -- # sort 00:18:46.398 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.398 14:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:46.398 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.398 14:58:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:46.398 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.657 14:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.593 [2024-07-12 14:58:26.183826] bdev_nvme.c:6991:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:47.593 [2024-07-12 14:58:26.183873] bdev_nvme.c:7071:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:47.593 [2024-07-12 14:58:26.183897] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:47.852 [2024-07-12 14:58:26.269978] bdev_nvme.c:6920:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:47.852 [2024-07-12 14:58:26.330608] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:47.852 [2024-07-12 14:58:26.330676] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:18:47.852 2024/07/12 14:58:26 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:47.852 request: 00:18:47.852 { 00:18:47.852 "method": "bdev_nvme_start_discovery", 00:18:47.852 "params": { 00:18:47.852 "name": "nvme", 00:18:47.852 "trtype": "tcp", 00:18:47.852 "traddr": "10.0.0.2", 00:18:47.852 "adrfam": "ipv4", 00:18:47.852 "trsvcid": "8009", 00:18:47.852 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:47.852 "wait_for_attach": true 00:18:47.852 } 00:18:47.852 } 00:18:47.852 Got JSON-RPC error response 00:18:47.852 GoRPCClient: error on JSON-RPC call 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.852 2024/07/12 14:58:26 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:47.852 request: 00:18:47.852 { 00:18:47.852 "method": "bdev_nvme_start_discovery", 00:18:47.852 "params": { 00:18:47.852 "name": "nvme_second", 00:18:47.852 "trtype": "tcp", 00:18:47.852 "traddr": "10.0.0.2", 00:18:47.852 "adrfam": "ipv4", 00:18:47.852 "trsvcid": "8009", 00:18:47.852 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:47.852 "wait_for_attach": true 00:18:47.852 } 00:18:47.852 } 00:18:47.852 Got JSON-RPC error response 00:18:47.852 GoRPCClient: error on JSON-RPC call 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:47.852 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:48.110 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.111 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:48.111 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.111 14:58:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.044 [2024-07-12 14:58:27.627557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:49.044 [2024-07-12 14:58:27.627640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ffc50 with addr=10.0.0.2, port=8010 00:18:49.044 [2024-07-12 14:58:27.627664] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:49.044 [2024-07-12 14:58:27.627675] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:49.044 [2024-07-12 14:58:27.627685] bdev_nvme.c:7053:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:49.989 [2024-07-12 14:58:28.627530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:49.989 [2024-07-12 14:58:28.627603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ffc50 with addr=10.0.0.2, port=8010 00:18:49.989 [2024-07-12 14:58:28.627626] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:49.989 [2024-07-12 14:58:28.627638] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:49.989 [2024-07-12 14:58:28.627648] bdev_nvme.c:7053:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:51.362 [2024-07-12 14:58:29.627357] bdev_nvme.c:7034:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 
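The three connection attempts above target port 8010, where nothing is listening, so each fails with ECONNREFUSED (errno 111) and the discovery poller gives up once the 3000 ms attach timeout elapses; the Code=-110 (Connection timed out) JSON-RPC error that follows is the expected outcome of this negative test. The call under test, as issued through the host RPC socket, is roughly the following (sketch only, parameters copied from the trace above):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000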
00:18:51.362 2024/07/12 14:58:29 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:18:51.362 request: 00:18:51.362 { 00:18:51.362 "method": "bdev_nvme_start_discovery", 00:18:51.362 "params": { 00:18:51.362 "name": "nvme_second", 00:18:51.362 "trtype": "tcp", 00:18:51.362 "traddr": "10.0.0.2", 00:18:51.362 "adrfam": "ipv4", 00:18:51.362 "trsvcid": "8010", 00:18:51.362 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:51.362 "wait_for_attach": false, 00:18:51.362 "attach_timeout_ms": 3000 00:18:51.362 } 00:18:51.362 } 00:18:51.362 Got JSON-RPC error response 00:18:51.362 GoRPCClient: error on JSON-RPC call 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88725 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:51.362 rmmod nvme_tcp 00:18:51.362 rmmod nvme_fabrics 00:18:51.362 rmmod nvme_keyring 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:18:51.362 14:58:29 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 88694 ']' 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 88694 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 88694 ']' 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 88694 00:18:51.362 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:18:51.363 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:51.363 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88694 00:18:51.363 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:51.363 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:51.363 killing process with pid 88694 00:18:51.363 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88694' 00:18:51.363 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 88694 00:18:51.363 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 88694 00:18:51.363 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:51.363 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:51.363 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:51.363 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.363 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:51.363 14:58:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.363 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.363 14:58:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.363 14:58:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:51.363 00:18:51.363 real 0m9.775s 00:18:51.363 user 0m19.718s 00:18:51.363 sys 0m1.455s 00:18:51.363 14:58:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:51.363 14:58:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.363 ************************************ 00:18:51.363 END TEST nvmf_host_discovery 00:18:51.363 ************************************ 00:18:51.621 14:58:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:51.621 14:58:30 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:51.621 14:58:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:51.621 14:58:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:51.621 14:58:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:51.621 ************************************ 00:18:51.621 START TEST nvmf_host_multipath_status 00:18:51.621 ************************************ 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:51.621 * Looking for test 
storage... 00:18:51.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:51.621 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:51.622 Cannot find device "nvmf_tgt_br" 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:18:51.622 Cannot find device "nvmf_tgt_br2" 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:51.622 Cannot find device "nvmf_tgt_br" 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:51.622 Cannot find device "nvmf_tgt_br2" 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:51.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:51.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:51.622 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:51.880 14:58:30 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:51.880 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:51.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:18:51.880 00:18:51.881 --- 10.0.0.2 ping statistics --- 00:18:51.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.881 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:51.881 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:51.881 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:18:51.881 00:18:51.881 --- 10.0.0.3 ping statistics --- 00:18:51.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.881 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:51.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:51.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:51.881 00:18:51.881 --- 10.0.0.1 ping statistics --- 00:18:51.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.881 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=89193 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 89193 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89193 ']' 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.881 14:58:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:52.139 [2024-07-12 14:58:30.561621] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
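nvmfappstart -m 0x3 in the trace prepends the namespace wrapper to the app command, launches nvmf_tgt, and then blocks in waitforlisten until the RPC socket answers. Roughly (the traced command line is exact; the polling loop is only a sketch of what the waitforlisten helper in autotest_common.sh does, not its literal code):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # poll the default RPC socket until the target is ready to accept RPCs
  while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

The DPDK/EAL and reactor notices that follow are the target's own start-up output: two reactors on cores 0 and 1, matching the 0x3 core mask.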
00:18:52.139 [2024-07-12 14:58:30.561747] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.139 [2024-07-12 14:58:30.709617] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:52.139 [2024-07-12 14:58:30.778848] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.139 [2024-07-12 14:58:30.778917] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.139 [2024-07-12 14:58:30.778933] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.139 [2024-07-12 14:58:30.778949] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.139 [2024-07-12 14:58:30.778959] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:52.139 [2024-07-12 14:58:30.779070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.139 [2024-07-12 14:58:30.779087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.073 14:58:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.073 14:58:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:53.073 14:58:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.073 14:58:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:53.073 14:58:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:53.073 14:58:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.073 14:58:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89193 00:18:53.073 14:58:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:53.331 [2024-07-12 14:58:31.930264] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.331 14:58:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:53.589 Malloc0 00:18:53.589 14:58:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:53.846 14:58:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:54.104 14:58:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:54.363 [2024-07-12 14:58:33.005847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.620 14:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 00:18:54.877 [2024-07-12 14:58:33.302060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:54.877 14:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89291 00:18:54.877 14:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:54.877 14:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:54.877 14:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89291 /var/tmp/bdevperf.sock 00:18:54.877 14:58:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89291 ']' 00:18:54.877 14:58:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.877 14:58:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:54.877 14:58:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.877 14:58:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:54.877 14:58:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:55.841 14:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.841 14:58:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:55.841 14:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:56.098 14:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:56.663 Nvme0n1 00:18:56.663 14:58:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:56.921 Nvme0n1 00:18:56.921 14:58:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:56.921 14:58:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:59.451 14:58:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:59.451 14:58:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:59.451 14:58:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n optimized 00:18:59.451 14:58:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:00.385 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:00.385 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:00.385 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.385 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:00.946 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.946 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:00.946 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.946 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:01.203 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:01.203 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:01.203 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.203 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:01.460 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.460 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:01.460 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.460 14:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:01.717 14:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.717 14:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:01.717 14:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.717 14:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:01.975 14:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.975 14:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:01.975 14:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.975 14:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:02.233 14:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.233 14:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:02.233 14:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:02.491 14:58:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:02.749 14:58:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:03.681 14:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:03.681 14:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:03.682 14:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:03.682 14:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.939 14:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:03.939 14:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:03.939 14:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.939 14:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:04.196 14:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.196 14:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:04.196 14:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:04.197 14:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.457 14:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.457 14:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:04.458 14:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.458 14:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:04.718 14:58:43 
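Backing up a step: between multipath_status.sh@33 and @55-@56 the trace has built the full two-path setup that all of the following ANA checks exercise: a 64 MB malloc namespace behind nqn.2016-06.io.spdk:cnode1 with TCP listeners on 10.0.0.2:4420 and 10.0.0.2:4421, plus a bdevperf instance that attaches both listeners as paths of a single Nvme0n1 bdev. Condensed from the commands traced above (flags exactly as logged; the rpc.py and bdevperf.py paths are shortened, and rpc.py without -s talks to the target while -s /var/tmp/bdevperf.sock talks to bdevperf):

  # target side
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # initiator side (bdevperf was started with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90)
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &

The -x multipath flag on the second attach is what folds the 4421 connection into the existing Nvme0 controller as an additional path instead of creating a second bdev.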
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.718 14:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:04.718 14:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.718 14:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:04.975 14:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.975 14:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:04.975 14:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:04.975 14:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.232 14:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.232 14:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:05.232 14:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:05.490 14:58:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:05.747 14:58:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:06.677 14:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:06.677 14:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:06.677 14:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.677 14:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:07.240 14:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.240 14:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:07.240 14:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.240 14:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:07.240 14:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:07.240 14:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:07.240 14:58:45 
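Each check_status line above expands into six of these port_status probes: current, connected and accessible for ports 4420 and 4421. The probe itself is just bdev_nvme_get_io_paths piped through jq; a standalone equivalent of what the trace shows (the RPC and jq filter are copied from the log, the small wrapper function is only for illustration):

  port_status() {  # port_status <trsvcid> <field> <expected>
      local got
      got=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
      [[ "$got" == "$3" ]]
  }
  port_status 4420 current true    # e.g. assert the 4420 path is the active one

With bdevperf pinned to a single core (-m 0x4) there is only one poll group, so the filter should yield a single true/false value for the comparison.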
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.240 14:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:07.496 14:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.496 14:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:07.496 14:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:07.496 14:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.060 14:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.060 14:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:08.060 14:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.060 14:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:08.316 14:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.316 14:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:08.316 14:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.316 14:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:08.574 14:58:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.574 14:58:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:08.574 14:58:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:08.831 14:58:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:09.130 14:58:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:10.066 14:58:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:10.066 14:58:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:10.066 14:58:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:10.066 14:58:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.325 14:58:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.325 14:58:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:10.325 14:58:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.325 14:58:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:10.583 14:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:10.583 14:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:10.583 14:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.583 14:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:11.149 14:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.149 14:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:11.149 14:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.149 14:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:11.407 14:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.407 14:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:11.407 14:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.407 14:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:11.665 14:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.665 14:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:11.665 14:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.665 14:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:11.923 14:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:11.923 14:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:11.923 14:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:12.181 14:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:12.439 14:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:13.374 14:58:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:13.374 14:58:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:13.374 14:58:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.374 14:58:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:13.938 14:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:13.938 14:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:13.938 14:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.938 14:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:13.938 14:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:13.938 14:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:13.938 14:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.938 14:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:14.502 14:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.502 14:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:14.502 14:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.502 14:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:14.502 14:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.502 14:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:14.502 14:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.502 14:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:14.759 14:58:53 
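Each set_ANA_state <state-4420> <state-4421> step in this trace amounts to two RPCs against the target, one per listener, followed by a one-second settle before the paths are re-read through bdevperf; condensed (commands exactly as logged, variable names illustrative):

  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
  sleep 1   # give the host side time to pick up the ANA change before check_status runs

The states exercised in this run are optimized, non_optimized and inaccessible.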
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:14.759 14:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:14.759 14:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:14.759 14:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.326 14:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:15.326 14:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:15.326 14:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:15.326 14:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:15.592 14:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:16.609 14:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:16.609 14:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:16.609 14:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.609 14:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:16.867 14:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:16.867 14:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:16.867 14:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.867 14:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:17.432 14:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.432 14:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:17.432 14:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.432 14:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:17.690 14:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.690 14:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:17.690 14:58:56 
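The check_status false false true true false false pass above encodes the expectation for the inaccessible/inaccessible case: neither path is current or accessible, but both TCP connections stay established (connected remains true). When reading such a dump by hand it can help to print all three fields per path at once; a hypothetical one-liner along the same lines as the per-field filters in the trace (not part of the test script):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq -r \
    '.poll_groups[].io_paths[] | "\(.transport.trsvcid): current=\(.current) connected=\(.connected) accessible=\(.accessible)"'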
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.690 14:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:17.948 14:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.948 14:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:17.948 14:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.948 14:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:18.205 14:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:18.205 14:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:18.205 14:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.205 14:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:18.463 14:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.463 14:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:18.720 14:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:18.720 14:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:18.976 14:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:19.234 14:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:20.606 14:58:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:20.607 14:58:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:20.607 14:58:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.607 14:58:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:20.607 14:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.607 14:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:20.607 14:58:59 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.607 14:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:20.864 14:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.864 14:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:20.864 14:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.864 14:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:21.122 14:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.122 14:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:21.122 14:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:21.122 14:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.453 14:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.453 14:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:21.453 14:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.453 14:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:21.712 14:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.712 14:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:21.712 14:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:21.712 14:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.970 14:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.970 14:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:21.970 14:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:22.228 14:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:22.486 14:59:01 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@124 -- # sleep 1 00:19:23.422 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:23.422 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:23.422 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.422 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:23.989 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:23.989 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:23.989 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.989 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:24.248 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.248 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:24.248 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.248 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:24.505 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.505 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:24.505 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.505 14:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:24.762 14:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.762 14:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:24.762 14:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.762 14:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:25.018 14:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.018 14:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:25.018 14:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.018 14:59:03 
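Everything up to multipath_status.sh@116 runs with the bdev's default multipath policy, under which the checks above consistently show at most one path with current=true at a time. The trace then switches Nvme0n1 to active/active with a single RPC to bdevperf (copied from the log):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

The @121 check_status true true true true true true confirms that with both listeners optimized, both paths are now current simultaneously, while the @125 check shows that a non_optimized listener (4420) still drops back to current=false even under active_active.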
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:25.275 14:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.275 14:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:25.275 14:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:25.533 14:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:25.791 14:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:26.724 14:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:26.724 14:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:26.724 14:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:26.725 14:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:26.983 14:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:26.983 14:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:26.983 14:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:26.983 14:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:27.548 14:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:27.548 14:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:27.548 14:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.548 14:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:27.806 14:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:27.806 14:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:27.806 14:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.806 14:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:28.063 14:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:19:28.063 14:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:28.063 14:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.063 14:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:28.320 14:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.320 14:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:28.320 14:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.320 14:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:28.577 14:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.577 14:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:28.578 14:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:28.835 14:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:29.094 14:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:30.025 14:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:30.025 14:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:30.025 14:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.025 14:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:30.281 14:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:30.281 14:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:30.281 14:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.281 14:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:30.538 14:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:30.538 14:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:30.538 14:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.538 14:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:31.102 14:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.102 14:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:31.103 14:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.103 14:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:31.361 14:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.361 14:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:31.361 14:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:31.361 14:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.620 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.620 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:31.620 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.620 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:31.879 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:31.879 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89291 00:19:31.879 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89291 ']' 00:19:31.879 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89291 00:19:31.879 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:19:31.879 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:31.879 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89291 00:19:31.879 killing process with pid 89291 00:19:31.879 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:31.879 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:31.879 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89291' 00:19:31.879 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89291 00:19:31.879 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89291 00:19:31.879 Connection 
closed with partial response: 00:19:31.879 00:19:31.879 00:19:32.140 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89291 00:19:32.140 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:32.140 [2024-07-12 14:58:33.390850] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:19:32.140 [2024-07-12 14:58:33.390991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89291 ] 00:19:32.140 [2024-07-12 14:58:33.532343] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.140 [2024-07-12 14:58:33.609373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.140 Running I/O for 90 seconds... 00:19:32.140 [2024-07-12 14:58:50.651429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.140 [2024-07-12 14:58:50.651509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.651584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.651605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.651628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.651643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.651664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.651679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.651700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.651715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.651744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.651764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.651785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.651800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.651821] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.651835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.651868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.651882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.651902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.651916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.651937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.651974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.651998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.652013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.652047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.652084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.652119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.652155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.652191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:19:32.140 [2024-07-12 14:58:50.652214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.652242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.140 [2024-07-12 14:58:50.652283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.140 [2024-07-12 14:58:50.652318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.140 [2024-07-12 14:58:50.652353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.140 [2024-07-12 14:58:50.652388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.140 [2024-07-12 14:58:50.652423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.140 [2024-07-12 14:58:50.652472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.140 [2024-07-12 14:58:50.652508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.140 [2024-07-12 14:58:50.652565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.140 [2024-07-12 14:58:50.652602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.140 [2024-07-12 14:58:50.652638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.140 [2024-07-12 14:58:50.652673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.140 [2024-07-12 14:58:50.652709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.140 [2024-07-12 14:58:50.652744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.140 [2024-07-12 14:58:50.652780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:32.140 [2024-07-12 14:58:50.652800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.140 [2024-07-12 14:58:50.652815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.652837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.652851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.654913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.654944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.654991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:32.141 [2024-07-12 14:58:50.655496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 
lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.655957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.655985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:58:50.656944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:58:50.656973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.141 [2024-07-12 14:59:07.582561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.141 [2024-07-12 14:59:07.582630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.582687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.582707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:19:32.142 [2024-07-12 14:59:07.582729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.582745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.582766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.582780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.582801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.582815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.582836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.582850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.582870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.582884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.582905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.582929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.582949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.582963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.582984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.582997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.583018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.583057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.583080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.583094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.583115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.583129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.584112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.584154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.584191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.584226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.584275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.584310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.584345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.584379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.584413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.584462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.584503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.584553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.584589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.584623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.584658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.584693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.584714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.584728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.585348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.585376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.585403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.585418] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.585439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.585454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.585474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.585489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.585510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.585540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.585575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.142 [2024-07-12 14:59:07.585591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.585613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.585628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.585649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.585665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.585686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.585700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.585721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.585735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.585755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.142 [2024-07-12 14:59:07.585770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.585791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:32.142 [2024-07-12 14:59:07.585805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:32.142 [2024-07-12 14:59:07.585825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.143 [2024-07-12 14:59:07.585839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:32.143 [2024-07-12 14:59:07.585860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.143 [2024-07-12 14:59:07.585874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:32.143 [2024-07-12 14:59:07.585895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.143 [2024-07-12 14:59:07.585909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:32.143 [2024-07-12 14:59:07.585929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.143 [2024-07-12 14:59:07.585943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:32.143 [2024-07-12 14:59:07.585964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.143 [2024-07-12 14:59:07.585978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:32.143 [2024-07-12 14:59:07.586007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.143 [2024-07-12 14:59:07.586022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.143 [2024-07-12 14:59:07.586043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.143 [2024-07-12 14:59:07.586057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:32.143 [2024-07-12 14:59:07.586077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.143 [2024-07-12 14:59:07.586092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:32.143 [2024-07-12 14:59:07.586113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.143 [2024-07-12 14:59:07.586127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:32.143 [2024-07-12 14:59:07.586148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:3760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.143 [2024-07-12 14:59:07.586162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:32.143 [2024-07-12 14:59:07.586182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.143 [2024-07-12 14:59:07.586197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:32.143 [2024-07-12 14:59:07.586218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.143 [2024-07-12 14:59:07.586232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:32.143 [2024-07-12 14:59:07.586252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.143 [2024-07-12 14:59:07.586267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:32.143 [2024-07-12 14:59:07.586287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.143 [2024-07-12 14:59:07.586301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:32.143 [2024-07-12 14:59:07.586322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.143 [2024-07-12 14:59:07.586337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:32.143 Received shutdown signal, test time was about 34.794609 seconds 00:19:32.143 00:19:32.143 Latency(us) 00:19:32.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.143 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:32.143 Verification LBA range: start 0x0 length 0x4000 00:19:32.143 Nvme0n1 : 34.79 8227.92 32.14 0.00 0.00 15526.51 148.95 4026531.84 00:19:32.143 =================================================================================================================== 00:19:32.143 Total : 8227.92 32.14 0.00 0.00 15526.51 148.95 4026531.84 00:19:32.143 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:32.401 14:59:10 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:32.401 rmmod nvme_tcp 00:19:32.401 rmmod nvme_fabrics 00:19:32.401 rmmod nvme_keyring 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 89193 ']' 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 89193 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89193 ']' 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89193 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89193 00:19:32.401 killing process with pid 89193 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89193' 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89193 00:19:32.401 14:59:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89193 00:19:32.710 14:59:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:32.710 14:59:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:32.710 14:59:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:32.710 14:59:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:32.710 14:59:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:32.710 14:59:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.710 14:59:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.710 14:59:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.710 14:59:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:32.710 00:19:32.710 real 0m41.146s 00:19:32.710 user 2m15.680s 00:19:32.710 sys 0m9.686s 00:19:32.710 14:59:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:32.710 14:59:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:32.710 ************************************ 00:19:32.710 END TEST nvmf_host_multipath_status 00:19:32.710 
************************************ 00:19:32.710 14:59:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:32.710 14:59:11 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:32.710 14:59:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:32.710 14:59:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:32.710 14:59:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:32.710 ************************************ 00:19:32.710 START TEST nvmf_discovery_remove_ifc 00:19:32.710 ************************************ 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:32.710 * Looking for test storage... 00:19:32.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:32.710 14:59:11 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:32.710 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
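The variables being set here parameterize nvmf_veth_init: one initiator veth (nvmf_init_if, 10.0.0.1) and two target veths (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3) living inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. Condensed to just the commands that appear in the trace below (the error-tolerant teardown of any stale links from a previous run is omitted), the setup amounts to the following shell sequence:

  # network namespace for the target side, veth pairs for the initiator and two target ports
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # addressing: initiator on 10.0.0.1, target ports on 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring every link up (including loopback inside the namespace)
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side ends of all three veth pairs together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # accept NVMe/TCP traffic on the data port and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # reachability checks in both directions
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The iptables rule opens NVMF_PORT (4420), the data port set at the top of this test; the three pings are the reachability checks whose output appears below.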
00:19:32.711 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:32.711 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:32.711 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:32.711 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:32.969 Cannot find device "nvmf_tgt_br" 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:32.969 Cannot find device "nvmf_tgt_br2" 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:32.969 Cannot find device "nvmf_tgt_br" 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:32.969 Cannot find device "nvmf_tgt_br2" 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:32.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:32.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:32.969 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:33.228 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:33.228 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:33.228 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:33.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:33.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:19:33.228 00:19:33.228 --- 10.0.0.2 ping statistics --- 00:19:33.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.228 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:33.228 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:33.228 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:33.228 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:19:33.228 00:19:33.228 --- 10.0.0.3 ping statistics --- 00:19:33.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.228 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:33.228 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:33.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:33.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:33.228 00:19:33.228 --- 10.0.0.1 ping statistics --- 00:19:33.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.228 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:33.228 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.228 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:19:33.228 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=90603 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 90603 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90603 ']' 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.229 14:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.229 [2024-07-12 14:59:11.737385] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
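Note: the surrounding lines show nvmfappstart launching the target side. nvmf_tgt is started inside the namespace with core mask 0x2, and the test blocks in waitforlisten until the application's default RPC socket (/var/tmp/spdk.sock) answers; the DPDK/EAL, tracepoint, and "TCP Transport Init" notices interleaved here and below are that process coming up. A rough, illustrative equivalent; the readiness loop is a sketch, not the waitforlisten implementation:

  # Start the NVMe-oF target inside the target namespace, then wait for
  # its default RPC socket to answer before issuing configuration RPCs.
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done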
00:19:33.229 [2024-07-12 14:59:11.737490] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.229 [2024-07-12 14:59:11.871559] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.487 [2024-07-12 14:59:11.930059] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.487 [2024-07-12 14:59:11.930114] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.487 [2024-07-12 14:59:11.930125] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.487 [2024-07-12 14:59:11.930134] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.487 [2024-07-12 14:59:11.930141] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:33.487 [2024-07-12 14:59:11.930170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.487 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.487 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:19:33.487 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.488 [2024-07-12 14:59:12.062352] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.488 [2024-07-12 14:59:12.070468] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:33.488 null0 00:19:33.488 [2024-07-12 14:59:12.102433] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90646 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90646 /tmp/host.sock 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90646 ']' 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.488 Waiting for process to start up and listen on UNIX 
domain socket /tmp/host.sock... 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.488 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.747 [2024-07-12 14:59:12.185497] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:19:33.747 [2024-07-12 14:59:12.185632] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90646 ] 00:19:33.747 [2024-07-12 14:59:12.330428] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.006 [2024-07-12 14:59:12.403336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.006 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.006 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:19:34.006 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:34.006 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:34.006 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.006 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:34.006 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.006 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:34.006 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.006 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:34.006 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.006 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:34.006 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.006 14:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:34.939 [2024-07-12 14:59:13.535422] bdev_nvme.c:6991:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:34.939 [2024-07-12 14:59:13.535462] bdev_nvme.c:7071:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:34.939 [2024-07-12 14:59:13.535483] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:35.199 [2024-07-12 14:59:13.623587] bdev_nvme.c:6920:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:35.199 
[2024-07-12 14:59:13.686877] bdev_nvme.c:7781:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:35.199 [2024-07-12 14:59:13.686964] bdev_nvme.c:7781:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:35.199 [2024-07-12 14:59:13.686998] bdev_nvme.c:7781:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:35.199 [2024-07-12 14:59:13.687019] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:35.199 [2024-07-12 14:59:13.687048] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:35.199 [2024-07-12 14:59:13.693961] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x54d910 was disconnected and freed. delete nvme_qpair. 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.199 14:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:35.199 14:59:13 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:36.571 14:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:36.571 14:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.571 14:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.571 14:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:36.571 14:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:36.571 14:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:36.571 14:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:36.571 14:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.571 14:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:36.571 14:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:37.505 14:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:37.505 14:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.505 14:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.505 14:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:37.505 14:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:37.505 14:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:37.505 14:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:37.505 14:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.505 14:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:37.505 14:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:38.450 14:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:38.450 14:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:38.450 14:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:38.450 14:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.450 14:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:38.450 14:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:38.450 14:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:38.450 14:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.450 14:59:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:38.450 14:59:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:39.383 14:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:39.383 14:59:18 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:39.383 14:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:39.383 14:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.383 14:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:39.383 14:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:39.383 14:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:39.641 14:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.641 14:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:39.641 14:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:40.575 14:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:40.575 14:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:40.575 14:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:40.575 14:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:40.575 14:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:40.575 14:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.575 14:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:40.575 14:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.575 [2024-07-12 14:59:19.114848] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:40.575 [2024-07-12 14:59:19.114927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.575 [2024-07-12 14:59:19.114943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.575 [2024-07-12 14:59:19.114957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.575 [2024-07-12 14:59:19.114967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.575 [2024-07-12 14:59:19.114978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.575 [2024-07-12 14:59:19.114988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.575 [2024-07-12 14:59:19.114998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.575 [2024-07-12 14:59:19.115007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.575 [2024-07-12 14:59:19.115018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.575 [2024-07-12 14:59:19.115027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.575 [2024-07-12 14:59:19.115036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5299c0 is same with the state(5) to be set 00:19:40.575 [2024-07-12 14:59:19.124841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5299c0 (9): Bad file descriptor 00:19:40.575 14:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:40.575 14:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:40.575 [2024-07-12 14:59:19.134873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:41.508 14:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:41.508 14:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:41.508 14:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.508 14:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.508 14:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:41.508 14:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:41.508 14:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:41.508 [2024-07-12 14:59:20.149656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:41.508 [2024-07-12 14:59:20.149750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5299c0 with addr=10.0.0.2, port=4420 00:19:41.508 [2024-07-12 14:59:20.149780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5299c0 is same with the state(5) to be set 00:19:41.508 [2024-07-12 14:59:20.149838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5299c0 (9): Bad file descriptor 00:19:41.508 [2024-07-12 14:59:20.150565] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:41.508 [2024-07-12 14:59:20.150628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:41.508 [2024-07-12 14:59:20.150648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:41.508 [2024-07-12 14:59:20.150666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:41.508 [2024-07-12 14:59:20.150703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
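Note: the repeated rpc_cmd / jq / sort / xargs lines in this stretch are the test's wait_for_bdev loop. After the target-side address was deleted and nvmf_tgt_if taken down (host/discovery_remove_ifc.sh@75 and @76 above), the host keeps failing to reconnect, and the test polls the host-side bdev list once per second until nvme0n1 disappears. Reconstructed from the trace, the helpers look roughly like this; the bodies are illustrative, not quoted from discovery_remove_ifc.sh:

  # List bdev names on the host-side instance as a single sorted line.
  get_bdev_list() {
      ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # Poll once per second until the bdev list matches the expectation.
  wait_for_bdev() {
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }

  wait_for_bdev ''    # here: wait for nvme0n1 to be deleted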
00:19:41.508 [2024-07-12 14:59:20.150723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:41.766 14:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.766 14:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:41.766 14:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:42.698 [2024-07-12 14:59:21.150782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:42.698 [2024-07-12 14:59:21.150855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:42.698 [2024-07-12 14:59:21.150868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:42.698 [2024-07-12 14:59:21.150878] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:19:42.698 [2024-07-12 14:59:21.150905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.698 [2024-07-12 14:59:21.150937] bdev_nvme.c:6742:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:19:42.698 [2024-07-12 14:59:21.151007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.698 [2024-07-12 14:59:21.151023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.698 [2024-07-12 14:59:21.151037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.698 [2024-07-12 14:59:21.151047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.698 [2024-07-12 14:59:21.151058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.698 [2024-07-12 14:59:21.151067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.698 [2024-07-12 14:59:21.151077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.698 [2024-07-12 14:59:21.151087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.698 [2024-07-12 14:59:21.151097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.698 [2024-07-12 14:59:21.151107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.698 [2024-07-12 14:59:21.151116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
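Note: the failure handling seen here is driven by the options passed to bdev_nvme_start_discovery earlier in the trace: reconnect attempts run once per second, queued I/O is failed after about one second, and after roughly two seconds without a usable path the controller is declared lost, nvme0n1 is deleted, and the discovery poller drops its entry for the subsystem (the "Resetting controller failed" and "in failed state" messages above mark those steps). For clarity, the equivalent rpc.py invocation, with the flags copied from the traced rpc_cmd line:

  # Start a discovery-driven connection with aggressive failover timeouts.
  #   --reconnect-delay-sec 1       retry the lost connection once per second
  #   --fast-io-fail-timeout-sec 1  fail queued I/O after ~1 s without a path
  #   --ctrlr-loss-timeout-sec 2    give up and delete the controller after ~2 s
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach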
00:19:42.698 [2024-07-12 14:59:21.151135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4b9540 (9): Bad file descriptor 00:19:42.698 [2024-07-12 14:59:21.151924] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:42.698 [2024-07-12 14:59:21.151954] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:42.698 14:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:44.071 14:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:44.071 14:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.071 14:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:44.071 14:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:44.071 14:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.071 14:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:44.071 14:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:44.071 14:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.071 14:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:44.071 14:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:44.637 [2024-07-12 14:59:23.157168] bdev_nvme.c:6991:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:44.637 [2024-07-12 14:59:23.157208] bdev_nvme.c:7071:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:44.637 [2024-07-12 14:59:23.157230] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:44.637 [2024-07-12 14:59:23.243316] bdev_nvme.c:6920:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:19:44.896 [2024-07-12 14:59:23.299576] bdev_nvme.c:7781:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:44.896 [2024-07-12 14:59:23.299645] bdev_nvme.c:7781:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:44.896 [2024-07-12 14:59:23.299672] bdev_nvme.c:7781:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:44.896 [2024-07-12 14:59:23.299692] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:19:44.896 [2024-07-12 14:59:23.299703] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:44.896 [2024-07-12 14:59:23.305831] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x532ed0 was disconnected and freed. delete nvme_qpair. 
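Note: the lines just above show the second half of the test. Once the bdev list is empty, the interface is restored inside the namespace (host/discovery_remove_ifc.sh@82 and @83), the discovery service picks the subsystem up again, and because the old controller was deleted the reattached namespace surfaces as nvme1n1 rather than nvme0n1, which is what the final wait checks before teardown. Condensed, with the ip commands copied from the trace and wait_for_bdev as sketched earlier:

  # Restore the target-side path and wait for discovery to re-attach.
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1    # new controller instance, hence the new name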
00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90646 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90646 ']' 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90646 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90646 00:19:44.896 killing process with pid 90646 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90646' 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90646 00:19:44.896 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90646 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:45.154 rmmod nvme_tcp 00:19:45.154 rmmod nvme_fabrics 00:19:45.154 rmmod nvme_keyring 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:19:45.154 14:59:23 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 90603 ']' 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 90603 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90603 ']' 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90603 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90603 00:19:45.154 killing process with pid 90603 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90603' 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90603 00:19:45.154 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90603 00:19:45.412 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:45.412 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:45.412 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:45.412 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.412 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:45.412 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.412 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.412 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.412 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:45.412 00:19:45.412 real 0m12.701s 00:19:45.412 user 0m23.000s 00:19:45.412 sys 0m1.490s 00:19:45.412 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:45.412 14:59:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:45.412 ************************************ 00:19:45.412 END TEST nvmf_discovery_remove_ifc 00:19:45.412 ************************************ 00:19:45.412 14:59:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:45.412 14:59:23 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:45.412 14:59:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:45.412 14:59:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.412 14:59:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:45.412 ************************************ 00:19:45.412 START TEST nvmf_identify_kernel_target 00:19:45.412 ************************************ 00:19:45.412 14:59:23 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:45.412 * Looking for test storage... 00:19:45.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.671 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:45.672 Cannot find device "nvmf_tgt_br" 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:45.672 Cannot find device "nvmf_tgt_br2" 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:45.672 Cannot find device "nvmf_tgt_br" 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:45.672 Cannot find device "nvmf_tgt_br2" 00:19:45.672 14:59:24 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:45.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:45.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:45.672 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:45.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:19:45.931 00:19:45.931 --- 10.0.0.2 ping statistics --- 00:19:45.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.931 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:45.931 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:45.931 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:19:45.931 00:19:45.931 --- 10.0.0.3 ping statistics --- 00:19:45.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.931 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:45.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:45.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:45.931 00:19:45.931 --- 10.0.0.1 ping statistics --- 00:19:45.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.931 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:45.931 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:46.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:46.190 Waiting for block devices as requested 00:19:46.190 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:46.449 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:46.449 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:46.449 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:46.449 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:46.449 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:46.449 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:46.449 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:46.449 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:46.449 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:46.449 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:46.449 No valid GPT data, bailing 00:19:46.449 14:59:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:46.449 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:46.449 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:46.449 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:46.449 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:46.449 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:46.449 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:46.449 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:46.449 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:46.449 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:46.449 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:46.449 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:46.449 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:46.449 No valid GPT data, bailing 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:46.708 No valid GPT data, bailing 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:46.708 No valid GPT data, bailing 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -a 10.0.0.1 -t tcp -s 4420 00:19:46.708 00:19:46.708 Discovery Log Number of Records 2, Generation counter 2 00:19:46.708 =====Discovery Log Entry 0====== 00:19:46.708 trtype: tcp 00:19:46.708 adrfam: ipv4 00:19:46.708 subtype: current discovery subsystem 00:19:46.708 treq: not specified, sq flow control disable supported 00:19:46.708 portid: 1 00:19:46.708 trsvcid: 4420 00:19:46.708 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:46.708 traddr: 10.0.0.1 00:19:46.708 eflags: none 00:19:46.708 sectype: none 00:19:46.708 =====Discovery Log Entry 1====== 00:19:46.708 trtype: tcp 00:19:46.708 adrfam: ipv4 00:19:46.708 subtype: nvme subsystem 00:19:46.708 treq: not specified, sq flow control disable supported 00:19:46.708 portid: 1 00:19:46.708 trsvcid: 4420 00:19:46.708 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:46.708 traddr: 10.0.0.1 00:19:46.708 eflags: none 00:19:46.708 sectype: none 00:19:46.708 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:46.708 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:46.967 ===================================================== 00:19:46.967 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:46.967 ===================================================== 00:19:46.967 Controller Capabilities/Features 00:19:46.967 ================================ 00:19:46.967 Vendor ID: 0000 00:19:46.967 Subsystem Vendor ID: 0000 00:19:46.967 Serial Number: 843d2ef2355426d4f56f 00:19:46.967 Model Number: Linux 00:19:46.967 Firmware Version: 6.7.0-68 00:19:46.967 Recommended Arb Burst: 0 00:19:46.967 IEEE OUI Identifier: 00 00 00 00:19:46.967 Multi-path I/O 00:19:46.967 May have multiple subsystem ports: No 00:19:46.967 May have multiple controllers: No 00:19:46.967 Associated with SR-IOV VF: No 00:19:46.967 Max Data Transfer Size: Unlimited 00:19:46.967 Max Number of Namespaces: 0 
00:19:46.967 Max Number of I/O Queues: 1024 00:19:46.967 NVMe Specification Version (VS): 1.3 00:19:46.967 NVMe Specification Version (Identify): 1.3 00:19:46.967 Maximum Queue Entries: 1024 00:19:46.967 Contiguous Queues Required: No 00:19:46.967 Arbitration Mechanisms Supported 00:19:46.967 Weighted Round Robin: Not Supported 00:19:46.967 Vendor Specific: Not Supported 00:19:46.967 Reset Timeout: 7500 ms 00:19:46.967 Doorbell Stride: 4 bytes 00:19:46.967 NVM Subsystem Reset: Not Supported 00:19:46.967 Command Sets Supported 00:19:46.967 NVM Command Set: Supported 00:19:46.967 Boot Partition: Not Supported 00:19:46.967 Memory Page Size Minimum: 4096 bytes 00:19:46.967 Memory Page Size Maximum: 4096 bytes 00:19:46.967 Persistent Memory Region: Not Supported 00:19:46.967 Optional Asynchronous Events Supported 00:19:46.967 Namespace Attribute Notices: Not Supported 00:19:46.967 Firmware Activation Notices: Not Supported 00:19:46.967 ANA Change Notices: Not Supported 00:19:46.967 PLE Aggregate Log Change Notices: Not Supported 00:19:46.967 LBA Status Info Alert Notices: Not Supported 00:19:46.967 EGE Aggregate Log Change Notices: Not Supported 00:19:46.967 Normal NVM Subsystem Shutdown event: Not Supported 00:19:46.967 Zone Descriptor Change Notices: Not Supported 00:19:46.967 Discovery Log Change Notices: Supported 00:19:46.967 Controller Attributes 00:19:46.967 128-bit Host Identifier: Not Supported 00:19:46.967 Non-Operational Permissive Mode: Not Supported 00:19:46.967 NVM Sets: Not Supported 00:19:46.967 Read Recovery Levels: Not Supported 00:19:46.967 Endurance Groups: Not Supported 00:19:46.967 Predictable Latency Mode: Not Supported 00:19:46.967 Traffic Based Keep ALive: Not Supported 00:19:46.967 Namespace Granularity: Not Supported 00:19:46.967 SQ Associations: Not Supported 00:19:46.967 UUID List: Not Supported 00:19:46.967 Multi-Domain Subsystem: Not Supported 00:19:46.967 Fixed Capacity Management: Not Supported 00:19:46.967 Variable Capacity Management: Not Supported 00:19:46.967 Delete Endurance Group: Not Supported 00:19:46.967 Delete NVM Set: Not Supported 00:19:46.967 Extended LBA Formats Supported: Not Supported 00:19:46.967 Flexible Data Placement Supported: Not Supported 00:19:46.967 00:19:46.967 Controller Memory Buffer Support 00:19:46.967 ================================ 00:19:46.967 Supported: No 00:19:46.967 00:19:46.967 Persistent Memory Region Support 00:19:46.968 ================================ 00:19:46.968 Supported: No 00:19:46.968 00:19:46.968 Admin Command Set Attributes 00:19:46.968 ============================ 00:19:46.968 Security Send/Receive: Not Supported 00:19:46.968 Format NVM: Not Supported 00:19:46.968 Firmware Activate/Download: Not Supported 00:19:46.968 Namespace Management: Not Supported 00:19:46.968 Device Self-Test: Not Supported 00:19:46.968 Directives: Not Supported 00:19:46.968 NVMe-MI: Not Supported 00:19:46.968 Virtualization Management: Not Supported 00:19:46.968 Doorbell Buffer Config: Not Supported 00:19:46.968 Get LBA Status Capability: Not Supported 00:19:46.968 Command & Feature Lockdown Capability: Not Supported 00:19:46.968 Abort Command Limit: 1 00:19:46.968 Async Event Request Limit: 1 00:19:46.968 Number of Firmware Slots: N/A 00:19:46.968 Firmware Slot 1 Read-Only: N/A 00:19:46.968 Firmware Activation Without Reset: N/A 00:19:46.968 Multiple Update Detection Support: N/A 00:19:46.968 Firmware Update Granularity: No Information Provided 00:19:46.968 Per-Namespace SMART Log: No 00:19:46.968 Asymmetric Namespace Access Log Page: 
Not Supported 00:19:46.968 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:46.968 Command Effects Log Page: Not Supported 00:19:46.968 Get Log Page Extended Data: Supported 00:19:46.968 Telemetry Log Pages: Not Supported 00:19:46.968 Persistent Event Log Pages: Not Supported 00:19:46.968 Supported Log Pages Log Page: May Support 00:19:46.968 Commands Supported & Effects Log Page: Not Supported 00:19:46.968 Feature Identifiers & Effects Log Page:May Support 00:19:46.968 NVMe-MI Commands & Effects Log Page: May Support 00:19:46.968 Data Area 4 for Telemetry Log: Not Supported 00:19:46.968 Error Log Page Entries Supported: 1 00:19:46.968 Keep Alive: Not Supported 00:19:46.968 00:19:46.968 NVM Command Set Attributes 00:19:46.968 ========================== 00:19:46.968 Submission Queue Entry Size 00:19:46.968 Max: 1 00:19:46.968 Min: 1 00:19:46.968 Completion Queue Entry Size 00:19:46.968 Max: 1 00:19:46.968 Min: 1 00:19:46.968 Number of Namespaces: 0 00:19:46.968 Compare Command: Not Supported 00:19:46.968 Write Uncorrectable Command: Not Supported 00:19:46.968 Dataset Management Command: Not Supported 00:19:46.968 Write Zeroes Command: Not Supported 00:19:46.968 Set Features Save Field: Not Supported 00:19:46.968 Reservations: Not Supported 00:19:46.968 Timestamp: Not Supported 00:19:46.968 Copy: Not Supported 00:19:46.968 Volatile Write Cache: Not Present 00:19:46.968 Atomic Write Unit (Normal): 1 00:19:46.968 Atomic Write Unit (PFail): 1 00:19:46.968 Atomic Compare & Write Unit: 1 00:19:46.968 Fused Compare & Write: Not Supported 00:19:46.968 Scatter-Gather List 00:19:46.968 SGL Command Set: Supported 00:19:46.968 SGL Keyed: Not Supported 00:19:46.968 SGL Bit Bucket Descriptor: Not Supported 00:19:46.968 SGL Metadata Pointer: Not Supported 00:19:46.968 Oversized SGL: Not Supported 00:19:46.968 SGL Metadata Address: Not Supported 00:19:46.968 SGL Offset: Supported 00:19:46.968 Transport SGL Data Block: Not Supported 00:19:46.968 Replay Protected Memory Block: Not Supported 00:19:46.968 00:19:46.968 Firmware Slot Information 00:19:46.968 ========================= 00:19:46.968 Active slot: 0 00:19:46.968 00:19:46.968 00:19:46.968 Error Log 00:19:46.968 ========= 00:19:46.968 00:19:46.968 Active Namespaces 00:19:46.968 ================= 00:19:46.968 Discovery Log Page 00:19:46.968 ================== 00:19:46.968 Generation Counter: 2 00:19:46.968 Number of Records: 2 00:19:46.968 Record Format: 0 00:19:46.968 00:19:46.968 Discovery Log Entry 0 00:19:46.968 ---------------------- 00:19:46.968 Transport Type: 3 (TCP) 00:19:46.968 Address Family: 1 (IPv4) 00:19:46.968 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:46.968 Entry Flags: 00:19:46.968 Duplicate Returned Information: 0 00:19:46.968 Explicit Persistent Connection Support for Discovery: 0 00:19:46.968 Transport Requirements: 00:19:46.968 Secure Channel: Not Specified 00:19:46.968 Port ID: 1 (0x0001) 00:19:46.968 Controller ID: 65535 (0xffff) 00:19:46.968 Admin Max SQ Size: 32 00:19:46.968 Transport Service Identifier: 4420 00:19:46.968 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:46.968 Transport Address: 10.0.0.1 00:19:46.968 Discovery Log Entry 1 00:19:46.968 ---------------------- 00:19:46.968 Transport Type: 3 (TCP) 00:19:46.968 Address Family: 1 (IPv4) 00:19:46.968 Subsystem Type: 2 (NVM Subsystem) 00:19:46.968 Entry Flags: 00:19:46.968 Duplicate Returned Information: 0 00:19:46.968 Explicit Persistent Connection Support for Discovery: 0 00:19:46.968 Transport Requirements: 00:19:46.968 
Secure Channel: Not Specified 00:19:46.968 Port ID: 1 (0x0001) 00:19:46.968 Controller ID: 65535 (0xffff) 00:19:46.968 Admin Max SQ Size: 32 00:19:46.968 Transport Service Identifier: 4420 00:19:46.968 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:46.968 Transport Address: 10.0.0.1 00:19:46.968 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:47.228 get_feature(0x01) failed 00:19:47.228 get_feature(0x02) failed 00:19:47.228 get_feature(0x04) failed 00:19:47.228 ===================================================== 00:19:47.228 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:47.228 ===================================================== 00:19:47.228 Controller Capabilities/Features 00:19:47.228 ================================ 00:19:47.228 Vendor ID: 0000 00:19:47.228 Subsystem Vendor ID: 0000 00:19:47.228 Serial Number: 506ebf15c1eb9a196948 00:19:47.228 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:47.228 Firmware Version: 6.7.0-68 00:19:47.228 Recommended Arb Burst: 6 00:19:47.228 IEEE OUI Identifier: 00 00 00 00:19:47.228 Multi-path I/O 00:19:47.228 May have multiple subsystem ports: Yes 00:19:47.228 May have multiple controllers: Yes 00:19:47.228 Associated with SR-IOV VF: No 00:19:47.228 Max Data Transfer Size: Unlimited 00:19:47.228 Max Number of Namespaces: 1024 00:19:47.228 Max Number of I/O Queues: 128 00:19:47.228 NVMe Specification Version (VS): 1.3 00:19:47.228 NVMe Specification Version (Identify): 1.3 00:19:47.228 Maximum Queue Entries: 1024 00:19:47.228 Contiguous Queues Required: No 00:19:47.228 Arbitration Mechanisms Supported 00:19:47.228 Weighted Round Robin: Not Supported 00:19:47.228 Vendor Specific: Not Supported 00:19:47.228 Reset Timeout: 7500 ms 00:19:47.228 Doorbell Stride: 4 bytes 00:19:47.228 NVM Subsystem Reset: Not Supported 00:19:47.228 Command Sets Supported 00:19:47.228 NVM Command Set: Supported 00:19:47.228 Boot Partition: Not Supported 00:19:47.228 Memory Page Size Minimum: 4096 bytes 00:19:47.228 Memory Page Size Maximum: 4096 bytes 00:19:47.228 Persistent Memory Region: Not Supported 00:19:47.228 Optional Asynchronous Events Supported 00:19:47.228 Namespace Attribute Notices: Supported 00:19:47.228 Firmware Activation Notices: Not Supported 00:19:47.228 ANA Change Notices: Supported 00:19:47.228 PLE Aggregate Log Change Notices: Not Supported 00:19:47.228 LBA Status Info Alert Notices: Not Supported 00:19:47.228 EGE Aggregate Log Change Notices: Not Supported 00:19:47.228 Normal NVM Subsystem Shutdown event: Not Supported 00:19:47.228 Zone Descriptor Change Notices: Not Supported 00:19:47.228 Discovery Log Change Notices: Not Supported 00:19:47.228 Controller Attributes 00:19:47.228 128-bit Host Identifier: Supported 00:19:47.228 Non-Operational Permissive Mode: Not Supported 00:19:47.228 NVM Sets: Not Supported 00:19:47.228 Read Recovery Levels: Not Supported 00:19:47.228 Endurance Groups: Not Supported 00:19:47.228 Predictable Latency Mode: Not Supported 00:19:47.228 Traffic Based Keep ALive: Supported 00:19:47.228 Namespace Granularity: Not Supported 00:19:47.228 SQ Associations: Not Supported 00:19:47.228 UUID List: Not Supported 00:19:47.228 Multi-Domain Subsystem: Not Supported 00:19:47.228 Fixed Capacity Management: Not Supported 00:19:47.228 Variable Capacity Management: Not Supported 00:19:47.228 
Delete Endurance Group: Not Supported 00:19:47.228 Delete NVM Set: Not Supported 00:19:47.228 Extended LBA Formats Supported: Not Supported 00:19:47.228 Flexible Data Placement Supported: Not Supported 00:19:47.228 00:19:47.228 Controller Memory Buffer Support 00:19:47.228 ================================ 00:19:47.228 Supported: No 00:19:47.228 00:19:47.228 Persistent Memory Region Support 00:19:47.228 ================================ 00:19:47.228 Supported: No 00:19:47.228 00:19:47.228 Admin Command Set Attributes 00:19:47.228 ============================ 00:19:47.228 Security Send/Receive: Not Supported 00:19:47.228 Format NVM: Not Supported 00:19:47.228 Firmware Activate/Download: Not Supported 00:19:47.228 Namespace Management: Not Supported 00:19:47.228 Device Self-Test: Not Supported 00:19:47.228 Directives: Not Supported 00:19:47.228 NVMe-MI: Not Supported 00:19:47.228 Virtualization Management: Not Supported 00:19:47.228 Doorbell Buffer Config: Not Supported 00:19:47.228 Get LBA Status Capability: Not Supported 00:19:47.228 Command & Feature Lockdown Capability: Not Supported 00:19:47.228 Abort Command Limit: 4 00:19:47.228 Async Event Request Limit: 4 00:19:47.228 Number of Firmware Slots: N/A 00:19:47.228 Firmware Slot 1 Read-Only: N/A 00:19:47.228 Firmware Activation Without Reset: N/A 00:19:47.228 Multiple Update Detection Support: N/A 00:19:47.228 Firmware Update Granularity: No Information Provided 00:19:47.228 Per-Namespace SMART Log: Yes 00:19:47.228 Asymmetric Namespace Access Log Page: Supported 00:19:47.228 ANA Transition Time : 10 sec 00:19:47.228 00:19:47.228 Asymmetric Namespace Access Capabilities 00:19:47.228 ANA Optimized State : Supported 00:19:47.228 ANA Non-Optimized State : Supported 00:19:47.228 ANA Inaccessible State : Supported 00:19:47.228 ANA Persistent Loss State : Supported 00:19:47.228 ANA Change State : Supported 00:19:47.228 ANAGRPID is not changed : No 00:19:47.228 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:47.228 00:19:47.228 ANA Group Identifier Maximum : 128 00:19:47.228 Number of ANA Group Identifiers : 128 00:19:47.228 Max Number of Allowed Namespaces : 1024 00:19:47.228 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:47.228 Command Effects Log Page: Supported 00:19:47.228 Get Log Page Extended Data: Supported 00:19:47.228 Telemetry Log Pages: Not Supported 00:19:47.228 Persistent Event Log Pages: Not Supported 00:19:47.228 Supported Log Pages Log Page: May Support 00:19:47.228 Commands Supported & Effects Log Page: Not Supported 00:19:47.228 Feature Identifiers & Effects Log Page:May Support 00:19:47.228 NVMe-MI Commands & Effects Log Page: May Support 00:19:47.228 Data Area 4 for Telemetry Log: Not Supported 00:19:47.228 Error Log Page Entries Supported: 128 00:19:47.228 Keep Alive: Supported 00:19:47.228 Keep Alive Granularity: 1000 ms 00:19:47.228 00:19:47.228 NVM Command Set Attributes 00:19:47.228 ========================== 00:19:47.228 Submission Queue Entry Size 00:19:47.228 Max: 64 00:19:47.228 Min: 64 00:19:47.228 Completion Queue Entry Size 00:19:47.228 Max: 16 00:19:47.228 Min: 16 00:19:47.228 Number of Namespaces: 1024 00:19:47.228 Compare Command: Not Supported 00:19:47.228 Write Uncorrectable Command: Not Supported 00:19:47.229 Dataset Management Command: Supported 00:19:47.229 Write Zeroes Command: Supported 00:19:47.229 Set Features Save Field: Not Supported 00:19:47.229 Reservations: Not Supported 00:19:47.229 Timestamp: Not Supported 00:19:47.229 Copy: Not Supported 00:19:47.229 Volatile Write Cache: Present 
00:19:47.229 Atomic Write Unit (Normal): 1 00:19:47.229 Atomic Write Unit (PFail): 1 00:19:47.229 Atomic Compare & Write Unit: 1 00:19:47.229 Fused Compare & Write: Not Supported 00:19:47.229 Scatter-Gather List 00:19:47.229 SGL Command Set: Supported 00:19:47.229 SGL Keyed: Not Supported 00:19:47.229 SGL Bit Bucket Descriptor: Not Supported 00:19:47.229 SGL Metadata Pointer: Not Supported 00:19:47.229 Oversized SGL: Not Supported 00:19:47.229 SGL Metadata Address: Not Supported 00:19:47.229 SGL Offset: Supported 00:19:47.229 Transport SGL Data Block: Not Supported 00:19:47.229 Replay Protected Memory Block: Not Supported 00:19:47.229 00:19:47.229 Firmware Slot Information 00:19:47.229 ========================= 00:19:47.229 Active slot: 0 00:19:47.229 00:19:47.229 Asymmetric Namespace Access 00:19:47.229 =========================== 00:19:47.229 Change Count : 0 00:19:47.229 Number of ANA Group Descriptors : 1 00:19:47.229 ANA Group Descriptor : 0 00:19:47.229 ANA Group ID : 1 00:19:47.229 Number of NSID Values : 1 00:19:47.229 Change Count : 0 00:19:47.229 ANA State : 1 00:19:47.229 Namespace Identifier : 1 00:19:47.229 00:19:47.229 Commands Supported and Effects 00:19:47.229 ============================== 00:19:47.229 Admin Commands 00:19:47.229 -------------- 00:19:47.229 Get Log Page (02h): Supported 00:19:47.229 Identify (06h): Supported 00:19:47.229 Abort (08h): Supported 00:19:47.229 Set Features (09h): Supported 00:19:47.229 Get Features (0Ah): Supported 00:19:47.229 Asynchronous Event Request (0Ch): Supported 00:19:47.229 Keep Alive (18h): Supported 00:19:47.229 I/O Commands 00:19:47.229 ------------ 00:19:47.229 Flush (00h): Supported 00:19:47.229 Write (01h): Supported LBA-Change 00:19:47.229 Read (02h): Supported 00:19:47.229 Write Zeroes (08h): Supported LBA-Change 00:19:47.229 Dataset Management (09h): Supported 00:19:47.229 00:19:47.229 Error Log 00:19:47.229 ========= 00:19:47.229 Entry: 0 00:19:47.229 Error Count: 0x3 00:19:47.229 Submission Queue Id: 0x0 00:19:47.229 Command Id: 0x5 00:19:47.229 Phase Bit: 0 00:19:47.229 Status Code: 0x2 00:19:47.229 Status Code Type: 0x0 00:19:47.229 Do Not Retry: 1 00:19:47.229 Error Location: 0x28 00:19:47.229 LBA: 0x0 00:19:47.229 Namespace: 0x0 00:19:47.229 Vendor Log Page: 0x0 00:19:47.229 ----------- 00:19:47.229 Entry: 1 00:19:47.229 Error Count: 0x2 00:19:47.229 Submission Queue Id: 0x0 00:19:47.229 Command Id: 0x5 00:19:47.229 Phase Bit: 0 00:19:47.229 Status Code: 0x2 00:19:47.229 Status Code Type: 0x0 00:19:47.229 Do Not Retry: 1 00:19:47.229 Error Location: 0x28 00:19:47.229 LBA: 0x0 00:19:47.229 Namespace: 0x0 00:19:47.229 Vendor Log Page: 0x0 00:19:47.229 ----------- 00:19:47.229 Entry: 2 00:19:47.229 Error Count: 0x1 00:19:47.229 Submission Queue Id: 0x0 00:19:47.229 Command Id: 0x4 00:19:47.229 Phase Bit: 0 00:19:47.229 Status Code: 0x2 00:19:47.229 Status Code Type: 0x0 00:19:47.229 Do Not Retry: 1 00:19:47.229 Error Location: 0x28 00:19:47.229 LBA: 0x0 00:19:47.229 Namespace: 0x0 00:19:47.229 Vendor Log Page: 0x0 00:19:47.229 00:19:47.229 Number of Queues 00:19:47.229 ================ 00:19:47.229 Number of I/O Submission Queues: 128 00:19:47.229 Number of I/O Completion Queues: 128 00:19:47.229 00:19:47.229 ZNS Specific Controller Data 00:19:47.229 ============================ 00:19:47.229 Zone Append Size Limit: 0 00:19:47.229 00:19:47.229 00:19:47.229 Active Namespaces 00:19:47.229 ================= 00:19:47.229 get_feature(0x05) failed 00:19:47.229 Namespace ID:1 00:19:47.229 Command Set Identifier: NVM (00h) 
00:19:47.229 Deallocate: Supported 00:19:47.229 Deallocated/Unwritten Error: Not Supported 00:19:47.229 Deallocated Read Value: Unknown 00:19:47.229 Deallocate in Write Zeroes: Not Supported 00:19:47.229 Deallocated Guard Field: 0xFFFF 00:19:47.229 Flush: Supported 00:19:47.229 Reservation: Not Supported 00:19:47.229 Namespace Sharing Capabilities: Multiple Controllers 00:19:47.229 Size (in LBAs): 1310720 (5GiB) 00:19:47.229 Capacity (in LBAs): 1310720 (5GiB) 00:19:47.229 Utilization (in LBAs): 1310720 (5GiB) 00:19:47.229 UUID: 26ffdef4-1d6c-42c8-8f38-4959a7a8ad45 00:19:47.229 Thin Provisioning: Not Supported 00:19:47.229 Per-NS Atomic Units: Yes 00:19:47.229 Atomic Boundary Size (Normal): 0 00:19:47.229 Atomic Boundary Size (PFail): 0 00:19:47.229 Atomic Boundary Offset: 0 00:19:47.229 NGUID/EUI64 Never Reused: No 00:19:47.229 ANA group ID: 1 00:19:47.229 Namespace Write Protected: No 00:19:47.229 Number of LBA Formats: 1 00:19:47.229 Current LBA Format: LBA Format #00 00:19:47.229 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:47.229 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:47.229 rmmod nvme_tcp 00:19:47.229 rmmod nvme_fabrics 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:47.229 
14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:47.229 14:59:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:47.797 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:48.055 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:48.055 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:48.055 00:19:48.055 real 0m2.675s 00:19:48.055 user 0m0.939s 00:19:48.055 sys 0m1.278s 00:19:48.055 14:59:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:48.055 ************************************ 00:19:48.055 END TEST nvmf_identify_kernel_target 00:19:48.055 ************************************ 00:19:48.055 14:59:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.055 14:59:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:48.055 14:59:26 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:48.055 14:59:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:48.055 14:59:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:48.055 14:59:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:48.314 ************************************ 00:19:48.315 START TEST nvmf_auth_host 00:19:48.315 ************************************ 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:48.315 * Looking for test storage... 
00:19:48.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:48.315 Cannot find device "nvmf_tgt_br" 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:48.315 Cannot find device "nvmf_tgt_br2" 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:48.315 Cannot find device "nvmf_tgt_br" 
00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:48.315 Cannot find device "nvmf_tgt_br2" 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:48.315 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:48.315 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:48.315 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:48.574 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:48.574 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:48.574 14:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:48.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:19:48.574 00:19:48.574 --- 10.0.0.2 ping statistics --- 00:19:48.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.574 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:48.574 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:48.574 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:19:48.574 00:19:48.574 --- 10.0.0.3 ping statistics --- 00:19:48.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.574 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:48.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:48.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:48.574 00:19:48.574 --- 10.0.0.1 ping statistics --- 00:19:48.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.574 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=91518 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 91518 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91518 ']' 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.574 14:59:27 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.574 14:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.578 14:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.578 14:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:49.578 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.578 14:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.578 14:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.578 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.578 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f2422aab0330258567b80a35d7f95f36 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ZJU 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f2422aab0330258567b80a35d7f95f36 0 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f2422aab0330258567b80a35d7f95f36 0 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f2422aab0330258567b80a35d7f95f36 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ZJU 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ZJU 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ZJU 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c97913dd2b467b3465065d46300718b63384fbfe88dd83fa8f78e1fbf1d42c1a 00:19:49.836 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.37q 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c97913dd2b467b3465065d46300718b63384fbfe88dd83fa8f78e1fbf1d42c1a 3 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c97913dd2b467b3465065d46300718b63384fbfe88dd83fa8f78e1fbf1d42c1a 3 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c97913dd2b467b3465065d46300718b63384fbfe88dd83fa8f78e1fbf1d42c1a 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.37q 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.37q 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.37q 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0e586d248478a13a67ea985de5bb11e2bdb2809b2b408815 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Plf 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0e586d248478a13a67ea985de5bb11e2bdb2809b2b408815 0 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0e586d248478a13a67ea985de5bb11e2bdb2809b2b408815 0 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0e586d248478a13a67ea985de5bb11e2bdb2809b2b408815 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Plf 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Plf 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Plf 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d0de09dd2f61d367c85527c86f6d09f43aeaec73076a9401 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.GbU 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d0de09dd2f61d367c85527c86f6d09f43aeaec73076a9401 2 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d0de09dd2f61d367c85527c86f6d09f43aeaec73076a9401 2 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d0de09dd2f61d367c85527c86f6d09f43aeaec73076a9401 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.GbU 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.GbU 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.GbU 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a36da50aabdb7dfa7810132a50c01ede 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hBA 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a36da50aabdb7dfa7810132a50c01ede 
1 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a36da50aabdb7dfa7810132a50c01ede 1 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a36da50aabdb7dfa7810132a50c01ede 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:49.837 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hBA 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hBA 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.hBA 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=abaaa56f28fb60bd89363a3799094335 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.73d 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key abaaa56f28fb60bd89363a3799094335 1 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 abaaa56f28fb60bd89363a3799094335 1 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=abaaa56f28fb60bd89363a3799094335 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.73d 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.73d 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.73d 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:50.096 14:59:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=010b30cccab4bd2cdf064923320682e2e1cdab80e50d685a 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.8ZL 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 010b30cccab4bd2cdf064923320682e2e1cdab80e50d685a 2 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 010b30cccab4bd2cdf064923320682e2e1cdab80e50d685a 2 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=010b30cccab4bd2cdf064923320682e2e1cdab80e50d685a 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.8ZL 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.8ZL 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.8ZL 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d8739f9e5595bd00b0a5b8f3b910d5ff 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.iBz 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d8739f9e5595bd00b0a5b8f3b910d5ff 0 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d8739f9e5595bd00b0a5b8f3b910d5ff 0 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d8739f9e5595bd00b0a5b8f3b910d5ff 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.iBz 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.iBz 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.iBz 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d8e4b8d85bca00966f8f809e31356ab0552d7988575ecfba18b4f3a3200a2d49 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zdq 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d8e4b8d85bca00966f8f809e31356ab0552d7988575ecfba18b4f3a3200a2d49 3 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d8e4b8d85bca00966f8f809e31356ab0552d7988575ecfba18b4f3a3200a2d49 3 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d8e4b8d85bca00966f8f809e31356ab0552d7988575ecfba18b4f3a3200a2d49 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:50.096 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:50.354 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zdq 00:19:50.354 14:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zdq 00:19:50.354 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.zdq 00:19:50.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.354 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:50.354 14:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91518 00:19:50.354 14:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91518 ']' 00:19:50.354 14:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.354 14:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.355 14:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
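Note on the key material generated above: every secret follows the NVMe-oF TP-8006 DHCHAP representation, i.e. hex bytes read from /dev/urandom with xxd are wrapped as DHHC-1:<hash id>:<base64 payload>: where the hash id is 00/01/02/03 for null/SHA-256/SHA-384/SHA-512 and the payload is the secret with a CRC-32 appended before base64 encoding (the same convention nvme-cli's gen-dhchap-key uses). A minimal standalone sketch of what gen_dhchap_key/format_dhchap_key do for a 32-character null-digest key, assuming python3 is available; this is illustrative only, not the test suite's exact helper code:

  key=$(xxd -p -c0 -l 16 /dev/urandom)    # 32 hex characters of secret material, as in the log
  file=$(mktemp -t spdk.key-null.XXX)     # e.g. /tmp/spdk.key-null.ZJU above
  python3 -c 'import sys, base64, zlib; s = sys.argv[1].encode(); d = int(sys.argv[2]); print("DHHC-1:%02x:%s:" % (d, base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode()))' "$key" 0 > "$file"
  chmod 0600 "$file"                      # key files are kept mode 0600, matching the chmod calls above

The little-endian CRC suffix and the two-hex-digit digest field match the DHHC-1:NN:...: strings visible later in this log (for example DHHC-1:00:... for null and DHHC-1:03:... for SHA-512 keys).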
00:19:50.355 14:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.355 14:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ZJU 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.37q ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.37q 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Plf 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.GbU ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GbU 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.hBA 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.73d ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.73d 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.8ZL 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.iBz ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.iBz 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.zdq 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
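At this point every generated key file has been registered with the running nvmf target. The rpc_cmd wrapper used above is, roughly, a call to the SPDK repo's scripts/rpc.py against the app's RPC socket (/var/tmp/spdk.sock here), so the loop amounts to something like the following sketch (paths are the key files generated earlier in this run; the exact wrapper behavior is an assumption):

  ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.ZJU
  ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.37q
  # ...likewise for key1/ckey1 through key3/ckey3 with the files shown above...
  ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key4  /tmp/spdk.key-sha512.zdq

With the keys in place, nvmet_auth_init resolves the initiator-side address (10.0.0.1) and sets up the configfs paths under /sys/kernel/config/nvmet that the following commands use to export a kernel nvmet subsystem (nqn.2024-02.io.spdk:cnode0) for the DHCHAP connection attempts.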
00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:19:50.613 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:50.871 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:50.871 14:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:51.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:51.129 Waiting for block devices as requested 00:19:51.129 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:51.129 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:51.696 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:51.696 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:51.696 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:51.696 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:51.696 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:51.696 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:51.696 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:51.696 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:51.696 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:51.955 No valid GPT data, bailing 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:51.955 No valid GPT data, bailing 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:51.955 No valid GPT data, bailing 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:51.955 No valid GPT data, bailing 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:19:51.955 14:59:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:19:51.955 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -a 10.0.0.1 -t tcp -s 4420 00:19:52.214 00:19:52.214 Discovery Log Number of Records 2, Generation counter 2 00:19:52.214 =====Discovery Log Entry 0====== 00:19:52.214 trtype: tcp 00:19:52.214 adrfam: ipv4 00:19:52.214 subtype: current discovery subsystem 00:19:52.214 treq: not specified, sq flow control disable supported 00:19:52.214 portid: 1 00:19:52.214 trsvcid: 4420 00:19:52.214 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:52.214 traddr: 10.0.0.1 00:19:52.214 eflags: none 00:19:52.214 sectype: none 00:19:52.214 =====Discovery Log Entry 1====== 00:19:52.214 trtype: tcp 00:19:52.214 adrfam: ipv4 00:19:52.214 subtype: nvme subsystem 00:19:52.214 treq: not specified, sq flow control disable supported 00:19:52.214 portid: 1 00:19:52.214 trsvcid: 4420 00:19:52.214 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:52.214 traddr: 10.0.0.1 00:19:52.214 eflags: none 00:19:52.214 sectype: none 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.214 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.473 nvme0n1 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: ]] 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.473 14:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.473 nvme0n1 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.473 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:19:52.474 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:19:52.474 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:19:52.474 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:52.474 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.474 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.474 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.474 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:52.474 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.474 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.474 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.474 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.732 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.732 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.732 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.732 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.732 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.732 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.732 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.732 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.732 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.732 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.732 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.733 nvme0n1 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.733 14:59:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: ]] 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.733 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.991 nvme0n1 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: ]] 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:52.991 14:59:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:52.991 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.992 nvme0n1 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.992 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.251 nvme0n1 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.251 14:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: ]] 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.509 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.768 nvme0n1 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.768 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.027 nvme0n1 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.027 14:59:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: ]] 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.027 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:54.028 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.028 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:54.028 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:54.028 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:54.028 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.028 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.028 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.028 nvme0n1 00:19:54.028 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.028 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.028 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.028 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.028 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.296 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.296 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.296 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.296 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.296 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.296 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.296 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.296 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:54.296 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: ]] 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.297 nvme0n1 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.297 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.555 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.555 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.555 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:54.555 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:54.555 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:54.555 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.555 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.555 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:54.555 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.555 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:19:54.555 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:54.555 14:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:54.555 14:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:54.555 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.555 14:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.555 nvme0n1 00:19:54.555 14:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.556 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: ]] 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
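The pattern repeating through this stretch of the trace (and continuing below for the remaining ffdhe4096 and ffdhe6144 passes) is the inner loop of host/auth.sh: for each DH group and each key index, the test re-keys the target, restricts the initiator to the digest/DH group under test, attaches a controller with the matching host key pair, confirms it shows up as nvme0, and detaches it again. The sketch below is reconstructed only from the RPCs visible in this log; nvmet_auth_set_key and the keys[]/ckeys[] arrays are defined earlier in auth.sh (not shown in this excerpt), and rpc_cmd is assumed to be the usual SPDK test wrapper around scripts/rpc.py.

# Minimal sketch of the per-(dhgroup, keyid) iteration traced above; the digest is
# fixed at sha256 for this part of the run. Helpers and key arrays are assumptions
# taken from earlier in host/auth.sh, not reproduced here.
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
    for keyid in "${!keys[@]}"; do
        # Target side: install the key (and controller key, when one exists) for this tuple.
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
        # Initiator side: restrict negotiation to the digest and DH group under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        # Attach with the matching host key; a controller key is passed only when ckeys[keyid] is set.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # Verify the authenticated controller is present, then detach before the next iteration.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done

Two recurring fragments in the trace map onto this sketch: the nvmf/common.sh@741-755 block is get_main_ns_ip resolving the initiator address (NVMF_INITIATOR_IP, here 10.0.0.1) that feeds the attach call, and the repeated '[[ 0 == 0 ]]' checks from autotest_common.sh appear to be the rpc_cmd wrapper asserting a zero exit status after each RPC, with '[[ nvme0 == \n\v\m\e\0 ]]' being the controller-name check from the verification step.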
00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.489 nvme0n1 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.489 14:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.489 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.490 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.748 nvme0n1 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: ]] 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.748 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.006 nvme0n1 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:19:56.006 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: ]] 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.007 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.265 nvme0n1 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:56.265 14:59:34 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.265 14:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.522 nvme0n1 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.522 14:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: ]] 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.420 14:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.986 nvme0n1 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.986 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.245 nvme0n1 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: ]] 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.245 
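The get_main_ns_ip trace (nvmf/common.sh@741-@755) picks the initiator address by transport: an associative array maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, and for this tcp run the result is 10.0.0.1. A reconstruction of the helper, assuming the selected candidate is dereferenced via bash indirect expansion and that a transport variable supplies the literal "tcp" seen in the trace (both assumptions; xtrace shows only the expanded values):

    get_main_ns_ip() {
        # "tcp" is what the trace shows for this run; the real helper would read
        # the transport from the test environment.
        local transport=tcp
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $transport || -z ${ip_candidates[$transport]} ]] && return 1
        ip=${ip_candidates[$transport]}   # -> NVMF_INITIATOR_IP
        ip=${!ip}                         # indirect expansion -> 10.0.0.1 (inferred)
        [[ -z $ip ]] && return 1
        echo "$ip"
    }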
14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.245 14:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.833 nvme0n1 00:19:59.833 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.833 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: ]] 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.834 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.093 nvme0n1 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.093 14:59:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.093 14:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.661 nvme0n1 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
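Keyid 4 is the one entry with an empty controller key (host/auth.sh@46 sets ckey=), so its attach runs with --dhchap-key key4 only, while keyids 0-3 also pass --dhchap-ctrlr-key ckeyN for bidirectional authentication. The switch is the ${...:+...} expansion at host/auth.sh@58; a minimal demonstration (key material shortened here, the full DHHC-1 secrets are in the trace above):

    # ${ckeys[keyid]:+...} expands to the extra arguments only when a
    # controller key is set and non-empty.
    declare -a ckeys=([0]="DHHC-1:03:..." [4]="")
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
    done
    # keyid=0 extra args: --dhchap-ctrlr-key ckey0   (bidirectional)
    # keyid=4 extra args: <none>                     (host-only authentication)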
ckey=DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: ]] 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.661 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.227 nvme0n1 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.227 14:59:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.227 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.485 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:01.485 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.485 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:01.485 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:01.485 14:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:01.485 14:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.485 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.485 14:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.051 nvme0n1 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: ]] 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:02.051 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.052 14:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.617 nvme0n1 00:20:02.617 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.617 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.617 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.617 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.617 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.617 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.875 
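Between every attach and the next key the script checks that exactly one controller named nvme0 is registered, then detaches it; the bare nvme0n1 tokens in the log are most plausibly the bdev name printed by the preceding attach call, though this excerpt does not confirm that. The check/teardown step as traced (host/auth.sh@64-@65), with an illustrative variable name:

    ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $ctrlr == "nvme0" ]]                     # connect + DH-HMAC-CHAP succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0   # tear down before the next keyid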
14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: ]] 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.875 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.442 nvme0n1 00:20:03.442 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.442 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.442 14:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.442 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.442 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.442 14:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:03.442 
14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.442 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.035 nvme0n1 00:20:04.035 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.035 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.035 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.035 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.035 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.035 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
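At this point the sweep moves from sha256 to sha384 (the host/auth.sh@100 marker reappears) and restarts the dhgroup loop at ffdhe2048. The loop markers @100-@104 visible throughout the trace imply the structure below; the array contents are only partially visible in this excerpt (sha256/sha384, ffdhe2048/ffdhe4096/ffdhe6144/ffdhe8192, keyids 0-4), so treat this as the shape of the sweep rather than its exact values:

    for digest in "${digests[@]}"; do            # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@101
            for keyid in "${!keys[@]}"; do       # host/auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
            done
        done
    done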
nvmet_auth_set_key sha384 ffdhe2048 0 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: ]] 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.295 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.296 nvme0n1 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.296 14:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.570 nvme0n1 00:20:04.570 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.570 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.570 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.570 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.570 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.570 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.570 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: ]] 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.571 nvme0n1 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.571 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: ]] 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:04.861 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.862 nvme0n1 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.862 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.121 nvme0n1 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: ]] 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
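Before every attach, the get_main_ns_ip helper (nvmf/common.sh@741-755 in the trace) decides which address the initiator should dial: it maps the transport under test to the name of an environment variable and resolves it via indirect expansion, which is why the conditionals above test the literal strings tcp, NVMF_INITIATOR_IP and 10.0.0.1 in turn. The body below is inferred from those expanded conditionals; the transport variable name (TEST_TRANSPORT) is an assumption.

  # Inferred sketch of the address-selection helper seen in the trace.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )

      [[ -z ${TEST_TRANSPORT} ]] && return 1               # 'tcp' in this run (ASSUMED variable name)
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}                 # e.g. NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1                          # indirect expansion -> 10.0.0.1
      echo "${!ip}"
  }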
00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.121 nvme0n1 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.121 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.380 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.380 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.380 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.380 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.380 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.380 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.380 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.380 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:05.380 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.380 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.380 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:05.380 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
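Each target-side key is then exercised from the SPDK host side by connect_authenticate (auth.sh@55-61, with the verification at @64-65): the initiator is restricted to the digest and DH group under test, a controller is attached with the matching key pair, the controller's presence confirms that DH-HMAC-CHAP succeeded, and the controller is detached before the next pass. The bare nvme0n1 lines in the trace are the bdev name returned by the attach call. The RPC names and flags below are taken verbatim from the trace; wrapping them in a function of this exact shape is an inferred sketch, and rpc_cmd is the test suite's wrapper for issuing RPCs to the running SPDK target.

  # Condensed, inferred sketch of the host-side half of one iteration.
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})  # omitted when there is no controller key (keyid 4)

      # Restrict the initiator to the digest/DH group under test.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"

      # Attach with the key pair registered for this keyid; DH-HMAC-CHAP runs here.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"

      # Authentication succeeded if the controller shows up; detach to prepare the next pass.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }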
00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.381 nvme0n1 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.381 14:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.381 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: ]] 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.639 nvme0n1 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:05.639 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: ]] 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.640 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.900 nvme0n1 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.900 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.160 nvme0n1 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.160 14:59:44 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: ]] 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.160 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.419 nvme0n1 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.419 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.420 14:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.679 nvme0n1 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.679 14:59:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: ]] 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.679 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.939 nvme0n1 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: ]] 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:06.939 14:59:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.939 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.199 nvme0n1 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:07.199 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.458 nvme0n1 00:20:07.458 14:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: ]] 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:07.458 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.459 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.024 nvme0n1 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.024 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.283 nvme0n1 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.283 14:59:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: ]] 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.283 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.541 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.541 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.541 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:20:08.541 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:08.541 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:08.541 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.541 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.541 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:08.541 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.541 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:08.541 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:08.541 14:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:08.541 14:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.541 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.541 14:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.799 nvme0n1 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:08.799 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: ]] 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.800 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.366 nvme0n1 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
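[Annotation] The iterations above all follow the same round trip for each (digest, dhgroup, keyid) combination: nvmet_auth_set_key pushes the DHHC-1 key (and the controller key, when one exists) to the target, then connect_authenticate restricts the host to that digest and DH group, attaches the controller with the matching --dhchap-key, checks that a controller named nvme0 shows up, and detaches it again. The outline below is a minimal reconstruction from the xtrace lines only; the function names, RPC calls, NQNs, key names and the 10.0.0.1:4420 address come straight from the trace, while the loop structure and variable wiring are paraphrased assumptions (xtrace does not reproduce the surrounding script verbatim).

    # Sketch reconstructed from this trace -- not the verbatim host/auth.sh.
    # keys[0..4], ckeys[0..4], digests and dhgroups are set up earlier in the real script.
    connect_authenticate() {                      # host/auth.sh@55-65 in the trace
        local digest=$1 dhgroup=$2 keyid=$3 ckey
        # Only pass --dhchap-ctrlr-key when a controller key exists for this keyid.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # Verify the authenticated connection produced a controller, then tear it down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    for digest in "${digests[@]}"; do             # sha384 and sha512 appear in this part of the log
        for dhgroup in "${dhgroups[@]}"; do       # ffdhe2048 ... ffdhe8192
            for keyid in "${!keys[@]}"; do        # 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side (host/auth.sh@103)
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side (host/auth.sh@104)
            done
        done
    done

The trace then resumes below with the next combination.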
00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.366 14:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.625 nvme0n1 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: ]] 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
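[Annotation] One recurring detail worth calling out is the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line at host/auth.sh@58: bash's ${parameter:+word} expansion yields word only when the parameter is set and non-empty, so the array ends up either empty or holding exactly the two extra arguments. That is why the attach for keyid 4, whose controller key is empty (the ckey= and [[ -z '' ]] entries above), is issued with --dhchap-key key4 alone, while keyids 0-3 also carry --dhchap-ctrlr-key. A standalone illustration of the idiom, independent of the test scripts (the sample values below are made up):

    # Hypothetical values for illustration only; index 4 is deliberately empty.
    ckeys=([0]="DHHC-1:03:example-ctrlr-key" [4]="")
    for keyid in 0 4; do
        extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#extra[@]} extra arg(s): ${extra[*]}"
    done
    # prints:
    #   keyid=0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
    #   keyid=4 -> 0 extra arg(s):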
00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.625 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.576 nvme0n1 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.576 14:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.142 nvme0n1 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: ]] 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.142 14:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.708 nvme0n1 00:20:11.708 14:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.708 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.708 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.708 14:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.708 14:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.708 14:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.708 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.708 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.708 14:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.708 14:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: ]] 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.966 14:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.532 nvme0n1 00:20:12.532 14:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.532 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:12.532 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.532 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.532 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:12.533 14:59:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.533 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.099 nvme0n1 00:20:13.099 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.099 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.099 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.099 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.099 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.099 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.099 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.099 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.099 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.099 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: ]] 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.358 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.359 nvme0n1 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.359 14:59:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.359 14:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.616 nvme0n1 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.616 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: ]] 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.617 nvme0n1 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.617 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.874 14:59:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: ]] 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:13.874 14:59:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.874 nvme0n1 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.874 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.132 nvme0n1 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: ]] 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.132 nvme0n1 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.132 
14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.132 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.390 14:59:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.390 nvme0n1 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.390 14:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
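Each nvmet_auth_set_key call traced above (host/auth.sh@42-@51) ends in a series of plain echo commands: the digest as 'hmac(sha512)', the DH group, the DHHC-1 host key and, when the iteration defines one, the DHHC-1 controller key. Bash xtrace does not print redirections, so the destinations are not visible here; they presumably are the kernel nvmet configfs attributes for the test host, and the paths in the sketch below are an assumption rather than something shown in this log:

  # Assumed destinations for the echoes traced at host/auth.sh@48-@51; the redirections
  # themselves are not part of the xtrace, so these configfs paths are an assumption.
  host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"      # digest for this iteration (@48)
  echo "ffdhe3072"    > "$host_cfg/dhchap_dhgroup"   # DH group for this iteration (@49)
  echo "$key"         > "$host_cfg/dhchap_key"       # DHHC-1 host secret (@45/@50)
  [[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"  # controller secret only when a ckey is set (@46/@51)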
00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: ]] 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.390 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.391 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:14.391 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:14.391 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:14.391 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.391 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.391 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:14.391 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.391 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:14.391 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:14.391 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:14.391 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.391 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.391 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.648 nvme0n1 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.649 14:59:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: ]] 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
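The nvmf/common.sh@741-@755 fragments that recur around every attach are get_main_ns_ip resolving which address the initiator should dial: an associative array maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), that name is dereferenced, and the resulting value, 10.0.0.1 in this run, is echoed back to the caller. A condensed sketch, with the helper body reconstructed from the trace markers rather than quoted from nvmf/common.sh:

  # Condensed from the nvmf/common.sh@741-@755 trace; assumes TEST_TRANSPORT=tcp as in this run.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # @747: bail out if the transport or its candidate variable name is unset
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # @748: ip now holds the variable *name*
      [[ -z ${!ip} ]] && return 1            # @750: indirect expansion; 10.0.0.1 here
      echo "${!ip}"                          # @755: hand the address back to connect_authenticate
  }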
00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.649 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.907 nvme0n1 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.907 
14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.907 nvme0n1 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.907 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: ]] 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.166 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.424 nvme0n1 00:20:15.424 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.424 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.424 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.424 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.424 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.424 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.424 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.424 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.425 14:59:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.425 14:59:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.698 nvme0n1 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
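On the SPDK side, every connect_authenticate iteration above follows the same sequence, visible verbatim in the trace at host/auth.sh@60-@65: restrict the allowed digests and DH groups with bdev_nvme_set_options, resolve the listener address via get_main_ns_ip, attach with the DH-HMAC-CHAP key (plus a controller key when one is configured), confirm the controller shows up under the expected name, and detach again so the next combination starts clean. Collected into one runnable sequence (rpc_cmd is the test suite's wrapper around SPDK's rpc.py; invoking it outside the test harness is the only assumption here):

  # One authentication round-trip as traced at host/auth.sh@60-@65, for key1/ckey1 over ffdhe4096.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # authentication succeeded, controller attached
  rpc_cmd bdev_nvme_detach_controller nvme0                                # tear down before the next key/dhgroup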
00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: ]] 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.698 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.699 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.971 nvme0n1 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: ]] 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.971 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.229 nvme0n1 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.229 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.230 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.230 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:16.230 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:16.230 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:16.230 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.230 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.230 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:16.230 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.230 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:16.230 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:16.230 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:16.230 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:16.230 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.230 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.488 nvme0n1 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: ]] 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
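The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) entries in this trace build the controller-key argument conditionally. A minimal sketch of how that bash expansion behaves (illustrative values, not the test's actual key list):

    # When ckeys[keyid] is set and non-empty, ckey expands to the two extra
    # attach_controller arguments; when it is empty (keyid 4 in this run),
    # ckey stays an empty array and no --dhchap-ctrlr-key is passed, which
    # is why the key4 attach calls in this log omit the controller key.
    declare -a ckey
    ckeys=( [2]="DHHC-1:01:example" [4]="" )          # assumed shapes only
    ckey=(${ckeys[4]:+--dhchap-ctrlr-key "ckey4"})
    echo "${#ckey[@]}"    # 0 -> unidirectional auth for this key
    ckey=(${ckeys[2]:+--dhchap-ctrlr-key "ckey2"})
    echo "${ckey[@]}"     # --dhchap-ctrlr-key ckey2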
00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:16.488 14:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:16.489 14:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.489 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.489 14:59:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.746 nvme0n1 00:20:16.746 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.746 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.746 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.746 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.746 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.746 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.746 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.746 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.746 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.746 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
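The get_main_ns_ip fragments in this trace (the ip_candidates map, the [[ -z tcp ]] checks, the final echo 10.0.0.1) resolve which address the host dials. A rough reconstruction of that selection, assuming indirect expansion of the variable name picked from the map (a sketch only, not the actual nvmf/common.sh source):

    # Map transport -> name of the variable that holds the address, then
    # dereference it; for tcp this run resolves to NVMF_INITIATOR_IP=10.0.0.1.
    declare -A ip_candidates=( ["rdma"]=NVMF_FIRST_TARGET_IP ["tcp"]=NVMF_INITIATOR_IP )
    NVMF_INITIATOR_IP=10.0.0.1       # value echoed throughout this trace
    TEST_TRANSPORT=tcp               # assumed; this job runs nvmf over TCP
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -n ${!ip} ]] && echo "${!ip}" # prints 10.0.0.1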
00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.004 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.263 nvme0n1 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: ]] 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.263 14:59:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.830 nvme0n1 00:20:17.830 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: ]] 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.831 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.089 nvme0n1 00:20:18.089 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.089 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.089 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.089 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.089 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.089 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:18.348 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.349 14:59:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.607 nvme0n1 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.607 14:59:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI0MjJhYWIwMzMwMjU4NTY3YjgwYTM1ZDdmOTVmMzZ6K9aK: 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: ]] 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzk3OTEzZGQyYjQ2N2IzNDY1MDY1ZDQ2MzAwNzE4YjYzMzg0ZmJmZTg4ZGQ4M2ZhOGY3OGUxZmJmMWQ0MmMxYS2w+KU=: 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.607 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.540 nvme0n1 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.540 14:59:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.106 nvme0n1 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.106 14:59:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTM2ZGE1MGFhYmRiN2RmYTc4MTAxMzJhNTBjMDFlZGUVvGN3: 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: ]] 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWJhYWE1NmYyOGZiNjBiZDg5MzYzYTM3OTkwOTQzMzXvVp27: 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.106 14:59:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.671 nvme0n1 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDEwYjMwY2NjYWI0YmQyY2RmMDY0OTIzMzIwNjgyZTJlMWNkYWI4MGU1MGQ2ODVhcR/BiA==: 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: ]] 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3MzlmOWU1NTk1YmQwMGIwYTViOGYzYjkxMGQ1ZmbWAzcP: 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:20.671 14:59:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.671 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.605 nvme0n1 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlNGI4ZDg1YmNhMDA5NjZmOGY4MDllMzEzNTZhYjA1NTJkNzk4ODU3NWVjZmJhMThiNGYzYTMyMDBhMmQ0OXn6/Ls=: 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.605 14:59:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.605 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.605 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:21.605 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:21.605 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:21.605 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:21.605 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.605 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.605 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:21.605 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.605 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:21.605 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:21.605 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:21.605 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:21.605 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:21.605 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.171 nvme0n1 00:20:22.171 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.171 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZkMjQ4NDc4YTEzYTY3ZWE5ODVkZTViYjExZTJiZGIyODA5YjJiNDA4ODE12cpoSQ==: 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBkZTA5ZGQyZjYxZDM2N2M4NTUyN2M4NmY2ZDA5ZjQzYWVhZWM3MzA3NmE5NDAxKMLIlQ==: 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.172 
15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.172 2024/07/12 15:00:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:22.172 request: 00:20:22.172 { 00:20:22.172 "method": "bdev_nvme_attach_controller", 00:20:22.172 "params": { 00:20:22.172 "name": "nvme0", 00:20:22.172 "trtype": "tcp", 00:20:22.172 "traddr": "10.0.0.1", 00:20:22.172 "adrfam": "ipv4", 00:20:22.172 "trsvcid": "4420", 00:20:22.172 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:22.172 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:22.172 "prchk_reftag": false, 00:20:22.172 "prchk_guard": false, 00:20:22.172 "hdgst": false, 00:20:22.172 "ddgst": false 00:20:22.172 } 00:20:22.172 } 00:20:22.172 Got JSON-RPC error response 00:20:22.172 GoRPCClient: error on JSON-RPC call 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.172 2024/07/12 15:00:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:22.172 request: 00:20:22.172 { 00:20:22.172 "method": "bdev_nvme_attach_controller", 00:20:22.172 "params": { 00:20:22.172 "name": 
"nvme0", 00:20:22.172 "trtype": "tcp", 00:20:22.172 "traddr": "10.0.0.1", 00:20:22.172 "adrfam": "ipv4", 00:20:22.172 "trsvcid": "4420", 00:20:22.172 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:22.172 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:22.172 "prchk_reftag": false, 00:20:22.172 "prchk_guard": false, 00:20:22.172 "hdgst": false, 00:20:22.172 "ddgst": false, 00:20:22.172 "dhchap_key": "key2" 00:20:22.172 } 00:20:22.172 } 00:20:22.172 Got JSON-RPC error response 00:20:22.172 GoRPCClient: error on JSON-RPC call 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:22.172 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.430 2024/07/12 15:00:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:22.430 request: 00:20:22.430 { 00:20:22.430 "method": "bdev_nvme_attach_controller", 00:20:22.430 "params": { 00:20:22.430 "name": "nvme0", 00:20:22.430 "trtype": "tcp", 00:20:22.430 "traddr": "10.0.0.1", 00:20:22.430 "adrfam": "ipv4", 00:20:22.430 "trsvcid": "4420", 00:20:22.430 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:22.430 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:22.430 "prchk_reftag": false, 00:20:22.430 "prchk_guard": false, 00:20:22.430 "hdgst": false, 00:20:22.430 "ddgst": false, 00:20:22.430 "dhchap_key": "key1", 00:20:22.430 "dhchap_ctrlr_key": "ckey2" 00:20:22.430 } 00:20:22.430 } 00:20:22.430 Got JSON-RPC error response 00:20:22.430 GoRPCClient: error on JSON-RPC call 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:22.430 rmmod nvme_tcp 00:20:22.430 rmmod nvme_fabrics 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 91518 ']' 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 91518 00:20:22.430 15:00:00 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 91518 ']' 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 91518 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91518 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:22.430 killing process with pid 91518 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:22.430 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91518' 00:20:22.431 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 91518 00:20:22.431 15:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 91518 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:22.688 15:00:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:23.252 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:23.509 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:23.509 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:23.509 15:00:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ZJU /tmp/spdk.key-null.Plf /tmp/spdk.key-sha256.hBA /tmp/spdk.key-sha384.8ZL /tmp/spdk.key-sha512.zdq /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:23.509 15:00:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:23.768 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:23.768 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:23.768 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:24.026 00:20:24.026 real 0m35.738s 00:20:24.026 user 0m32.094s 00:20:24.026 sys 0m3.521s 00:20:24.026 15:00:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:24.026 15:00:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.026 ************************************ 00:20:24.026 END TEST nvmf_auth_host 00:20:24.026 ************************************ 00:20:24.026 15:00:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:24.026 15:00:02 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:20:24.026 15:00:02 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:24.026 15:00:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:24.026 15:00:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.026 15:00:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:24.026 ************************************ 00:20:24.026 START TEST nvmf_digest 00:20:24.026 ************************************ 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:24.026 * Looking for test storage... 
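The teardown a few lines above unwinds the kernel nvmet target through configfs in the reverse order of its setup: unlink the host from the subsystem's allowed_hosts, remove the host entry, disable and remove the namespace, unlink the subsystem from the port, remove the port and subsystem directories, unload the nvmet modules, and finally delete the temporary DHHC-1 key files. A rough standalone equivalent, assuming the same single port/namespace layout; the file behind the bare "echo 0" is not visible in the trace, so the namespace enable path below is an assumption:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    rm    "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"   # drop the host ACL link
    rmdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"            # remove the host definition
    echo 0 > "$subsys/namespaces/1/enable"                    # assumed destination of the "echo 0"
    rm -f "$nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
    rmdir "$subsys/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                               # unload the kernel target modules
    rm -f /tmp/spdk.key-*                                     # discard the generated DHHC-1 key files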
00:20:24.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:24.026 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:24.027 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:24.027 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:24.027 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:24.027 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:24.027 Cannot find device "nvmf_tgt_br" 00:20:24.027 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:20:24.027 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:24.027 Cannot find device "nvmf_tgt_br2" 00:20:24.027 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:20:24.027 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:24.027 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:24.027 Cannot find device "nvmf_tgt_br" 00:20:24.027 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:20:24.027 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:24.027 Cannot find device "nvmf_tgt_br2" 00:20:24.027 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:20:24.027 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:24.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:24.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:24.284 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:24.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:20:24.541 00:20:24.541 --- 10.0.0.2 ping statistics --- 00:20:24.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.541 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:24.541 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:24.541 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:20:24.541 00:20:24.541 --- 10.0.0.3 ping statistics --- 00:20:24.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.541 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:24.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
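nvmf_veth_init builds the virtual topology the digest tests run on: a network namespace for the target, veth pairs whose target-side ends are moved into that namespace, a bridge joining the host-side peers, and an iptables rule admitting NVMe/TCP traffic on port 4420, verified with the pings above. Condensed from the commands in the trace (one target interface shown; the second, nvmf_tgt_if2 at 10.0.0.3, follows the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side moves into the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge the host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2    # initiator -> target reachability check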
00:20:24.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:24.541 00:20:24.541 --- 10.0.0.1 ping statistics --- 00:20:24.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.541 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:24.541 ************************************ 00:20:24.541 START TEST nvmf_digest_clean 00:20:24.541 ************************************ 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=93111 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 93111 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93111 ']' 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.541 15:00:02 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.541 15:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:24.541 [2024-07-12 15:00:03.056801] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:20:24.541 [2024-07-12 15:00:03.056896] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.797 [2024-07-12 15:00:03.196020] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.798 [2024-07-12 15:00:03.266098] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.798 [2024-07-12 15:00:03.266158] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.798 [2024-07-12 15:00:03.266172] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.798 [2024-07-12 15:00:03.266182] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.798 [2024-07-12 15:00:03.266191] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
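nvmfappstart launches the target application inside the namespace with --wait-for-rpc, so it comes up idle and only starts its framework once the test has pushed its configuration over the RPC socket; waitforlisten then polls that socket (up to max_retries=100) before continuing. The helpers are internal to the test scripts, so the polling loop below is an illustrative stand-in rather than their actual implementation:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!

    # Wait until the app answers on its default RPC socket (rough waitforlisten stand-in).
    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done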
00:20:24.798 [2024-07-12 15:00:03.266219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:25.729 null0 00:20:25.729 [2024-07-12 15:00:04.165802] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.729 [2024-07-12 15:00:04.189891] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93167 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93167 /var/tmp/bperf.sock 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93167 ']' 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:25.729 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
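common_target_config brings the target from idle to serving before the first bdevperf client is launched; the trace only shows its results (a null0 bdev, the "TCP Transport Init" notice, and a listener on 10.0.0.2 port 4420). One way to reproduce that state with individual rpc.py calls follows; the RPC names are real SPDK commands, but the null bdev size and the subsystem flags here are assumptions, and '-t tcp -o' is simply the NVMF_TRANSPORT_OPTS value seen earlier in the log:

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC framework_start_init                      # app was started with --wait-for-rpc
    $RPC bdev_null_create null0 100 4096           # backing bdev; size/block size assumed
    $RPC nvmf_create_transport -t tcp -o           # '-t tcp -o' from NVMF_TRANSPORT_OPTS
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420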
00:20:25.730 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:25.730 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.730 15:00:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:25.730 [2024-07-12 15:00:04.253970] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:20:25.730 [2024-07-12 15:00:04.254083] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93167 ] 00:20:25.987 [2024-07-12 15:00:04.394881] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.987 [2024-07-12 15:00:04.455444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.933 15:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.933 15:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:26.933 15:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:26.933 15:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:26.933 15:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:26.933 15:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:26.933 15:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:27.496 nvme0n1 00:20:27.496 15:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:27.496 15:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:27.496 Running I/O for 2 seconds... 
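Each run_bperf invocation follows the same client-side pattern: start bdevperf idle on its own RPC socket, initialize its framework, attach an NVMe-oF bdev with the digest option under test (--ddgst enables the TCP data digest here), then drive the timed workload through bdevperf.py. Reconstructed from the commands in the trace, with paths as in this CI environment and the first workload's parameters:

    BPERF_SOCK=/var/tmp/bperf.sock

    # bdevperf comes up idle (-z --wait-for-rpc) so the bdev can be attached first.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r $BPERF_SOCK -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $BPERF_SOCK framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $BPERF_SOCK \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Run the configured 2-second workload against the attached nvme0n1 bdev.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests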
00:20:29.409 00:20:29.409 Latency(us) 00:20:29.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.409 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:29.409 nvme0n1 : 2.00 18022.83 70.40 0.00 0.00 7093.54 3217.22 17039.36 00:20:29.409 =================================================================================================================== 00:20:29.409 Total : 18022.83 70.40 0.00 0.00 7093.54 3217.22 17039.36 00:20:29.409 0 00:20:29.409 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:29.409 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:29.409 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:29.409 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:29.409 | select(.opcode=="crc32c") 00:20:29.409 | "\(.module_name) \(.executed)"' 00:20:29.409 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93167 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93167 ']' 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93167 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93167 00:20:29.976 killing process with pid 93167 00:20:29.976 Received shutdown signal, test time was about 2.000000 seconds 00:20:29.976 00:20:29.976 Latency(us) 00:20:29.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.976 =================================================================================================================== 00:20:29.976 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93167' 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93167 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93167 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93252 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93252 /var/tmp/bperf.sock 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93252 ']' 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:29.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.976 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:29.976 [2024-07-12 15:00:08.592406] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:20:29.976 [2024-07-12 15:00:08.592712] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93252 ] 00:20:29.976 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:29.976 Zero copy mechanism will not be used. 
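After each workload the harness confirms that the digest work went through the expected accel module: with scan_dsa=false the crc32c operations must have been executed by the software module, and the executed counter must be non-zero. The check reduces to one RPC plus the jq filter shown in the trace:

    read -r acc_module acc_executed < <(
        scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[]
                | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"'
    )

    # DSA offload is disabled in this pass, so the software path must have done the work.
    [[ $acc_module == software ]] && (( acc_executed > 0 )) \
        && echo "crc32c handled by software module ($acc_executed ops)"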
00:20:30.234 [2024-07-12 15:00:08.736013] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.234 [2024-07-12 15:00:08.795852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.234 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.234 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:30.234 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:30.234 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:30.234 15:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:30.798 15:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:30.798 15:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:31.054 nvme0n1 00:20:31.054 15:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:31.054 15:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:31.054 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:31.054 Zero copy mechanism will not be used. 00:20:31.054 Running I/O for 2 seconds... 00:20:33.582 00:20:33.582 Latency(us) 00:20:33.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.582 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:33.582 nvme0n1 : 2.00 7557.04 944.63 0.00 0.00 2113.27 539.93 7030.23 00:20:33.582 =================================================================================================================== 00:20:33.583 Total : 7557.04 944.63 0.00 0.00 2113.27 539.93 7030.23 00:20:33.583 0 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:33.583 | select(.opcode=="crc32c") 00:20:33.583 | "\(.module_name) \(.executed)"' 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93252 00:20:33.583 15:00:11 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93252 ']' 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93252 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93252 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:33.583 killing process with pid 93252 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93252' 00:20:33.583 Received shutdown signal, test time was about 2.000000 seconds 00:20:33.583 00:20:33.583 Latency(us) 00:20:33.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.583 =================================================================================================================== 00:20:33.583 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93252 00:20:33.583 15:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93252 00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93323 00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93323 /var/tmp/bperf.sock 00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93323 ']' 00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.583 15:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:33.583 [2024-07-12 15:00:12.204502] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:20:33.583 [2024-07-12 15:00:12.204660] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93323 ] 00:20:33.841 [2024-07-12 15:00:12.350711] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.841 [2024-07-12 15:00:12.409927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.773 15:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:34.773 15:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:34.773 15:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:34.773 15:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:34.773 15:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:35.030 15:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:35.030 15:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:35.287 nvme0n1 00:20:35.544 15:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:35.544 15:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:35.544 Running I/O for 2 seconds... 
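Once bperf.sock is up, each clean run is driven entirely over that socket, as traced above. In outline (paths exactly as they appear in this log; --ddgst is what enables the TCP data digest under test):

    # Finish app initialization, attach the target with data digest enabled, then run I/O.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests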
00:20:38.071 00:20:38.071 Latency(us) 00:20:38.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.071 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:38.071 nvme0n1 : 2.00 21367.08 83.47 0.00 0.00 5980.67 2532.07 15966.95 00:20:38.071 =================================================================================================================== 00:20:38.071 Total : 21367.08 83.47 0.00 0.00 5980.67 2532.07 15966.95 00:20:38.071 0 00:20:38.071 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:38.071 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:38.071 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:38.071 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:38.071 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:38.071 | select(.opcode=="crc32c") 00:20:38.071 | "\(.module_name) \(.executed)"' 00:20:38.071 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:38.071 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:38.071 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:38.071 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:38.071 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93323 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93323 ']' 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93323 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93323 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:38.072 killing process with pid 93323 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93323' 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93323 00:20:38.072 Received shutdown signal, test time was about 2.000000 seconds 00:20:38.072 00:20:38.072 Latency(us) 00:20:38.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.072 =================================================================================================================== 00:20:38.072 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93323 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93420 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93420 /var/tmp/bperf.sock 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93420 ']' 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.072 15:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:38.072 [2024-07-12 15:00:16.660626] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:20:38.072 [2024-07-12 15:00:16.660777] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93420 ] 00:20:38.072 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:38.072 Zero copy mechanism will not be used. 
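The pass/fail decision for each of these clean runs comes from the accel statistics queried after every perform_tests call: crc32c must have executed at least once, and with scan_dsa=false it must have run in the software module. A condensed sketch of that check, using the same jq filter shown in the trace:

    # Read back which accel module executed crc32c and how many times.
    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))             # digest computation actually ran
    [[ $acc_module == software ]]      # scan_dsa=false, so the software module is expected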
00:20:38.330 [2024-07-12 15:00:16.803929] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.330 [2024-07-12 15:00:16.862804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.263 15:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.263 15:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:39.263 15:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:39.263 15:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:39.263 15:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:39.520 15:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:39.520 15:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:39.778 nvme0n1 00:20:39.778 15:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:39.778 15:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:39.778 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:39.778 Zero copy mechanism will not be used. 00:20:39.778 Running I/O for 2 seconds... 00:20:42.320 00:20:42.320 Latency(us) 00:20:42.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.320 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:42.320 nvme0n1 : 2.00 6169.20 771.15 0.00 0.00 2587.76 2085.24 8936.73 00:20:42.320 =================================================================================================================== 00:20:42.320 Total : 6169.20 771.15 0.00 0.00 2587.76 2085.24 8936.73 00:20:42.320 0 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:42.320 | select(.opcode=="crc32c") 00:20:42.320 | "\(.module_name) \(.executed)"' 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93420 00:20:42.320 15:00:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93420 ']' 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93420 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93420 00:20:42.320 killing process with pid 93420 00:20:42.320 Received shutdown signal, test time was about 2.000000 seconds 00:20:42.320 00:20:42.320 Latency(us) 00:20:42.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.320 =================================================================================================================== 00:20:42.320 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93420' 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93420 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93420 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93111 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93111 ']' 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93111 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93111 00:20:42.320 killing process with pid 93111 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93111' 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93111 00:20:42.320 15:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93111 00:20:42.577 ************************************ 00:20:42.577 END TEST nvmf_digest_clean 00:20:42.577 ************************************ 00:20:42.577 00:20:42.577 real 0m18.141s 00:20:42.577 user 0m35.388s 00:20:42.577 sys 0m4.321s 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # 
run_test nvmf_digest_error run_digest_error 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:42.577 ************************************ 00:20:42.577 START TEST nvmf_digest_error 00:20:42.577 ************************************ 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:42.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=93533 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 93533 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93533 ']' 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:42.577 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:42.834 [2024-07-12 15:00:21.230692] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:20:42.834 [2024-07-12 15:00:21.230778] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.834 [2024-07-12 15:00:21.368429] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.834 [2024-07-12 15:00:21.437669] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.834 [2024-07-12 15:00:21.437736] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.834 [2024-07-12 15:00:21.437750] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.834 [2024-07-12 15:00:21.437760] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.834 [2024-07-12 15:00:21.437769] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
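For the error-path test the nvmf target is started with --wait-for-rpc so that crc32c can be rerouted to the error-injection accel module before the framework comes up; the trace that follows shows that assignment plus a null bdev exported over TCP on 10.0.0.2:4420. Approximately, as target-side rpc.py calls (a reconstruction: the exact bdev size, serial number and subsystem options are not captured in this log and are placeholders):

    # Target-side bring-up for nvmf_digest_error (sketch; see caveats above).
    rpc.py accel_assign_opc -o crc32c -m error        # crc32c now handled by the error module
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp
    rpc.py bdev_null_create null0 100 4096            # size_mb/block_size assumed
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420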
00:20:42.835 [2024-07-12 15:00:21.437798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:43.092 [2024-07-12 15:00:21.562235] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:43.092 null0 00:20:43.092 [2024-07-12 15:00:21.642216] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.092 [2024-07-12 15:00:21.666375] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93566 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93566 /var/tmp/bperf.sock 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93566 ']' 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.092 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.092 15:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:43.092 [2024-07-12 15:00:21.721605] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:20:43.092 [2024-07-12 15:00:21.721696] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93566 ] 00:20:43.360 [2024-07-12 15:00:21.853577] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.360 [2024-07-12 15:00:21.924106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.643 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.643 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:43.643 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:43.643 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:43.643 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:43.643 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.643 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:43.906 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.906 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:43.906 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:44.163 nvme0n1 00:20:44.163 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:44.163 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.163 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:44.163 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.163 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:44.163 15:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:44.163 Running I/O for 2 seconds... 
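The digest errors that fill the rest of this run are expected. With crc32c assigned to the error module on the target, the test first disables injection while the controller attaches, then arms crc32c corruption, so the host-side --ddgst verification fails and each affected READ completes with a transient transport error. In shell terms (target-side rpc.py, exactly the calls traced above):

    # Injection is disabled while nvme0 attaches, then re-armed to corrupt crc32c results.
    rpc.py accel_error_inject_error -o crc32c -t disable
    # ... bdev_nvme_attach_controller --ddgst runs against /var/tmp/bperf.sock here ...
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # Result: the initiator logs "data digest error" in nvme_tcp.c and the READs
    # complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22), as seen below.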
00:20:44.421 [2024-07-12 15:00:22.832813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:22.832878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:22.832896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:22.848105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:22.848155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:22.848171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:22.862579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:22.862627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:22.862642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:22.876640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:22.876685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:22.876700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:22.890101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:22.890144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:22.890159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:22.901589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:22.901629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:22.901644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:22.916160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:22.916202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:22.916217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:22.931083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:22.931129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:22.931144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:22.945189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:22.945233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:22.945248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:22.959205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:22.959248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:22.959262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:22.973684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:22.973726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:22.973741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:22.988136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:22.988177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:22.988192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:23.001894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:23.001936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:23.001951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:23.014041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:23.014083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:23.014097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:23.028094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:23.028139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:23.028154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:23.042620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:23.042668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:23.042683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:23.056577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:23.056636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:23.056651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.422 [2024-07-12 15:00:23.070039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.422 [2024-07-12 15:00:23.070080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.422 [2024-07-12 15:00:23.070096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.085606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.085653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.085668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.099399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.099443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.099458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.111692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.111735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:44.680 [2024-07-12 15:00:23.111749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.125048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.125090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.125105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.140401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.140443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.140458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.153272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.153320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.153335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.168200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.168252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.168267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.181240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.181284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.181298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.195420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.195467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.195483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.210382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.210430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:9090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.210444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.224502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.224554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.224570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.236791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.236832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.236846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.251351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.251404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.251420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.264468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.264509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.264538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.280459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.280501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.280529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.293660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.293702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.293717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.310011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.310052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.310066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.680 [2024-07-12 15:00:23.322434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.680 [2024-07-12 15:00:23.322478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.680 [2024-07-12 15:00:23.322493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.947 [2024-07-12 15:00:23.338181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.947 [2024-07-12 15:00:23.338226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.947 [2024-07-12 15:00:23.338241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.947 [2024-07-12 15:00:23.351836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.947 [2024-07-12 15:00:23.351878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.947 [2024-07-12 15:00:23.351893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.947 [2024-07-12 15:00:23.363667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.947 [2024-07-12 15:00:23.363708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.947 [2024-07-12 15:00:23.363723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.947 [2024-07-12 15:00:23.378582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.947 [2024-07-12 15:00:23.378622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.947 [2024-07-12 15:00:23.378637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.947 [2024-07-12 15:00:23.393153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.947 [2024-07-12 15:00:23.393194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.947 [2024-07-12 15:00:23.393209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.947 [2024-07-12 15:00:23.407347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 
00:20:44.947 [2024-07-12 15:00:23.407390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.947 [2024-07-12 15:00:23.407406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.947 [2024-07-12 15:00:23.422080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.947 [2024-07-12 15:00:23.422124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.947 [2024-07-12 15:00:23.422141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.947 [2024-07-12 15:00:23.436220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.947 [2024-07-12 15:00:23.436279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.948 [2024-07-12 15:00:23.436295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.948 [2024-07-12 15:00:23.448160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.948 [2024-07-12 15:00:23.448201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.948 [2024-07-12 15:00:23.448215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.948 [2024-07-12 15:00:23.463234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.948 [2024-07-12 15:00:23.463279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.948 [2024-07-12 15:00:23.463294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.948 [2024-07-12 15:00:23.480496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.948 [2024-07-12 15:00:23.480567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.948 [2024-07-12 15:00:23.480584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.948 [2024-07-12 15:00:23.498348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.948 [2024-07-12 15:00:23.498392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.948 [2024-07-12 15:00:23.498407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.948 [2024-07-12 15:00:23.513870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.948 [2024-07-12 15:00:23.513938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.948 [2024-07-12 15:00:23.513954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.948 [2024-07-12 15:00:23.533304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.948 [2024-07-12 15:00:23.533365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.948 [2024-07-12 15:00:23.533381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.948 [2024-07-12 15:00:23.547921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.948 [2024-07-12 15:00:23.547964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.948 [2024-07-12 15:00:23.547978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.948 [2024-07-12 15:00:23.565385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.948 [2024-07-12 15:00:23.565446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.948 [2024-07-12 15:00:23.565462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.948 [2024-07-12 15:00:23.580811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:44.948 [2024-07-12 15:00:23.580875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.948 [2024-07-12 15:00:23.580891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.210 [2024-07-12 15:00:23.600257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.600306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.600322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.210 [2024-07-12 15:00:23.618178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.618226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.618242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.210 [2024-07-12 15:00:23.632436] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.632479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.632494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.210 [2024-07-12 15:00:23.650545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.650588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.650603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.210 [2024-07-12 15:00:23.665166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.665209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.665225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.210 [2024-07-12 15:00:23.680849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.680921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.680937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.210 [2024-07-12 15:00:23.695009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.695057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.695072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.210 [2024-07-12 15:00:23.709651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.709710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.709727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.210 [2024-07-12 15:00:23.723467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.723511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.723545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:45.210 [2024-07-12 15:00:23.735489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.735576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.735599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.210 [2024-07-12 15:00:23.751740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.751804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.751831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.210 [2024-07-12 15:00:23.768567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.768628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.768652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.210 [2024-07-12 15:00:23.783147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.783210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.783235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.210 [2024-07-12 15:00:23.799145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.799191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.799207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.210 [2024-07-12 15:00:23.814303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.814352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.814368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.210 [2024-07-12 15:00:23.828743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.210 [2024-07-12 15:00:23.828794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.210 [2024-07-12 15:00:23.828810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.211 [2024-07-12 15:00:23.841636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.211 [2024-07-12 15:00:23.841680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.211 [2024-07-12 15:00:23.841695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.211 [2024-07-12 15:00:23.856196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.211 [2024-07-12 15:00:23.856251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.211 [2024-07-12 15:00:23.856267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:23.872060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:23.872106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:23.872121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:23.886214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:23.886257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:23.886272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:23.897794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:23.897835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:23.897850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:23.911793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:23.911835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:23.911850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:23.927110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:23.927181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:23.927197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:23.941483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:23.941539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:23.941557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:23.953319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:23.953362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:23.953377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:23.967166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:23.967208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:23.967223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:23.982723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:23.982768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:23.982783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:23.998004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:23.998049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:23.998064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:24.011137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:24.011181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:24.011196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:24.025370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:24.025441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:45.469 [2024-07-12 15:00:24.025458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:24.038865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:24.038908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:24.038923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:24.053971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:24.054019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:24.054035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:24.068036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:24.068079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:24.068094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:24.080878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:24.080919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:24.080934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:24.096003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:24.096079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:24.096096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.469 [2024-07-12 15:00:24.110025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.469 [2024-07-12 15:00:24.110068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.469 [2024-07-12 15:00:24.110083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.124627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.124669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:3844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.124685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.139917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.139963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.139978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.155501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.155564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.155581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.170190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.170242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.170259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.185777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.185845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.185861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.199332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.199391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.199407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.212424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.212469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.212483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.227160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.227232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.227249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.240465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.240511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.240542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.255228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.255282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.255298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.267849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.267891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.267906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.282035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.282078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.282093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.296601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.296649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.296665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.311534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.311579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.311594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.325775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 
00:20:45.727 [2024-07-12 15:00:24.325832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.325848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.339796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.339852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.339868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.353339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.353390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.353406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.727 [2024-07-12 15:00:24.367890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.727 [2024-07-12 15:00:24.367937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.727 [2024-07-12 15:00:24.367952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.383103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.383151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.985 [2024-07-12 15:00:24.383166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.397743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.397785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.985 [2024-07-12 15:00:24.397800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.411146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.411196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.985 [2024-07-12 15:00:24.411212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.426103] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.426156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.985 [2024-07-12 15:00:24.426171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.441212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.441266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.985 [2024-07-12 15:00:24.441285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.452438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.452483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.985 [2024-07-12 15:00:24.452498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.467911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.467970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.985 [2024-07-12 15:00:24.467986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.483435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.483481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.985 [2024-07-12 15:00:24.483496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.497714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.497757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.985 [2024-07-12 15:00:24.497772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.511126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.511168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.985 [2024-07-12 15:00:24.511182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.523789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.523830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.985 [2024-07-12 15:00:24.523845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.538439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.538480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.985 [2024-07-12 15:00:24.538496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.552215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.552268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.985 [2024-07-12 15:00:24.552283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.565320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.565363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.985 [2024-07-12 15:00:24.565379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.580721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.580766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.985 [2024-07-12 15:00:24.580782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.985 [2024-07-12 15:00:24.595172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.985 [2024-07-12 15:00:24.595230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.986 [2024-07-12 15:00:24.595257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.986 [2024-07-12 15:00:24.610093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.986 [2024-07-12 15:00:24.610166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.986 [2024-07-12 15:00:24.610182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.986 [2024-07-12 15:00:24.623777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.986 [2024-07-12 15:00:24.623829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.986 [2024-07-12 15:00:24.623845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.986 [2024-07-12 15:00:24.638483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:45.986 [2024-07-12 15:00:24.638543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.986 [2024-07-12 15:00:24.638560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.243 [2024-07-12 15:00:24.653479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:46.243 [2024-07-12 15:00:24.653545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.243 [2024-07-12 15:00:24.653563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.243 [2024-07-12 15:00:24.668659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:46.243 [2024-07-12 15:00:24.668723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.243 [2024-07-12 15:00:24.668746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.243 [2024-07-12 15:00:24.683167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:46.243 [2024-07-12 15:00:24.683217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.243 [2024-07-12 15:00:24.683232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.243 [2024-07-12 15:00:24.697664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:46.243 [2024-07-12 15:00:24.697713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.243 [2024-07-12 15:00:24.697730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.244 [2024-07-12 15:00:24.712450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:46.244 [2024-07-12 15:00:24.712510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.244 [2024-07-12 15:00:24.712544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.244 [2024-07-12 15:00:24.724824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:46.244 [2024-07-12 15:00:24.724866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.244 [2024-07-12 15:00:24.724881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.244 [2024-07-12 15:00:24.739162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:46.244 [2024-07-12 15:00:24.739206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.244 [2024-07-12 15:00:24.739221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.244 [2024-07-12 15:00:24.753386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:46.244 [2024-07-12 15:00:24.753433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.244 [2024-07-12 15:00:24.753448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.244 [2024-07-12 15:00:24.765815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:46.244 [2024-07-12 15:00:24.765858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.244 [2024-07-12 15:00:24.765873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.244 [2024-07-12 15:00:24.783376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:46.244 [2024-07-12 15:00:24.783429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.244 [2024-07-12 15:00:24.783444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.244 [2024-07-12 15:00:24.796345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:46.244 [2024-07-12 15:00:24.796405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.244 [2024-07-12 15:00:24.796422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.244 [2024-07-12 15:00:24.811051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19cee10) 00:20:46.244 [2024-07-12 15:00:24.811106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:46.244 [2024-07-12 15:00:24.811122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:46.244
00:20:46.244 Latency(us)
00:20:46.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:46.244 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:20:46.244 nvme0n1 : 2.01 17650.33 68.95 0.00 0.00 7243.47 3664.06 20375.74
00:20:46.244 ===================================================================================================================
00:20:46.244 Total : 17650.33 68.95 0.00 0.00 7243.47 3664.06 20375.74
00:20:46.244 0
00:20:46.244 15:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:46.244 15:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:46.244 15:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:46.244 15:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:46.244 | .driver_specific
00:20:46.244 | .nvme_error
00:20:46.244 | .status_code
00:20:46.244 | .command_transient_transport_error'
00:20:46.501 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 ))
00:20:46.502 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93566
00:20:46.502 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93566 ']'
00:20:46.502 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93566
00:20:46.502 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:20:46.502 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:46.502 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93566
00:20:46.761 killing process with pid 93566 Received shutdown signal, test time was about 2.000000 seconds
00:20:46.761
00:20:46.761 Latency(us)
00:20:46.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:46.761 ===================================================================================================================
00:20:46.761 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93566'
00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93566
00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93566
00:20:46.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
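The pass/fail check traced just above pulls the per-bdev NVMe error counters out of bdevperf over its RPC socket and asserts that at least one COMMAND TRANSIENT TRANSPORT ERROR was counted (138 in this run). A minimal sketch of that check, reconstructed from the trace rather than copied from the digest.sh source, assuming the same socket path and bdev name as in this log:

    # Sketch only: counts COMMAND TRANSIENT TRANSPORT ERROR completions seen by a bdev.
    # Assumes bdevperf serves RPCs on /var/tmp/bperf.sock and that
    # bdev_nvme_set_options --nvme-error-stat was applied, so the counters appear
    # under .driver_specific.nvme_error in the bdev_get_iostat output.
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # the run above measured 138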
00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93637 00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93637 /var/tmp/bperf.sock 00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93637 ']' 00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.761 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:46.761 [2024-07-12 15:00:25.373560] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:20:46.761 [2024-07-12 15:00:25.373857] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93637 ] 00:20:46.761 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:46.761 Zero copy mechanism will not be used. 
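The run_bperf_err launch step traced above starts a fresh bdevperf instance on a private RPC socket and holds the job idle until the test triggers it. A rough sketch of that launch using only the parameters printed in this log (core mask 0x2, randread, 131072-byte I/Os, queue depth 16, 2 seconds, -z to wait for an external start); waitforlisten is the autotest_common.sh helper that polls until the RPC socket is up:

    # Sketch of the bdevperf launch seen in the trace; paths and options are the ones
    # printed above. -z keeps the workload queued until a perform_tests RPC arrives.
    bperf_sock=/var/tmp/bperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$bperf_sock" -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" "$bperf_sock"   # wait for the RPC socket before configuring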
00:20:47.017 [2024-07-12 15:00:25.508251] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.017 [2024-07-12 15:00:25.567302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.017 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.017 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:47.017 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:47.017 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:47.581 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:47.581 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.581 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:47.581 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.581 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:47.581 15:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:47.839 nvme0n1 00:20:47.839 15:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:47.839 15:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.839 15:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:47.839 15:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.839 15:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:47.839 15:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:47.839 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:47.839 Zero copy mechanism will not be used. 00:20:47.839 Running I/O for 2 seconds... 
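The RPC calls traced above assemble the digest-error scenario for this run: NVMe error counting and unlimited bdev retries are enabled on the bdevperf side, the controller is attached with data digest (--ddgst) enabled, and the accel error injector is told to corrupt crc32c results so the digests stop matching; perform_tests then releases the queued job. A condensed sketch of that sequence, with the address, NQN and flags taken from the trace (the -i 32 injection argument is reproduced verbatim rather than interpreted, and the injection RPC goes through rpc_cmd, presumably the main target application's default socket rather than the bperf socket):

    # Sketch of the setup RPCs from the trace above; rpc.py talks to the bdevperf
    # socket except for the error injection, which is sent via the default socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    $rpc -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32   # corrupt crc32c results
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests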
00:20:47.839 [2024-07-12 15:00:26.473011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:47.839 [2024-07-12 15:00:26.473081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.839 [2024-07-12 15:00:26.473105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.839 [2024-07-12 15:00:26.478701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:47.839 [2024-07-12 15:00:26.478749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.839 [2024-07-12 15:00:26.478765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.839 [2024-07-12 15:00:26.483541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:47.839 [2024-07-12 15:00:26.483587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.839 [2024-07-12 15:00:26.483602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.839 [2024-07-12 15:00:26.487762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:47.840 [2024-07-12 15:00:26.487803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.840 [2024-07-12 15:00:26.487818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.840 [2024-07-12 15:00:26.491375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:47.840 [2024-07-12 15:00:26.491416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.840 [2024-07-12 15:00:26.491431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.496040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.496081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.496095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.500976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.501021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.501036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.504898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.504962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.504977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.509267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.509342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.509357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.513781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.513846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.513861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.517851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.517909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.517924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.521438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.521493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.521509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.525665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.525725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.525740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.529958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.530022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.530038] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.533825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.533883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.533898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.537636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.537697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.537712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.541738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.541806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.541822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.546137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.546215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.546231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.549413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.549466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.549481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.553696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.553752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.553767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.558062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.558107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.558122] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.561124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.561164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.561178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.565511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.565569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.565584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.569621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.569662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.569676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.100 [2024-07-12 15:00:26.573804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.100 [2024-07-12 15:00:26.573866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.100 [2024-07-12 15:00:26.573882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.577231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.577287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.577302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.581509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.581577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.581592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.585447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.585503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:48.101 [2024-07-12 15:00:26.585531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.589881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.589939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.589955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.594115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.594173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.594189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.598567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.598625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.598640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.602479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.602546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.602563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.606373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.606431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.606447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.610553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.610612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.610628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.614756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.614815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.614830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.618290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.618347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.618362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.622408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.622482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.622498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.626463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.626536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.626554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.629803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.629844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.629858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.634085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.634126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.634141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.638720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.638762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.638776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.642702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.642744] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.642759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.646504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.646559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.646574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.650546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.650597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.650610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.655058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.655128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.655143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.659306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.659365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.659380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.663002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.663052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.663067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.667669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.667711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.667725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.672639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.672681] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.672695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.677666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.677707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.677721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.680555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.680591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.680604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.684467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.684509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.684539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.688259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.688317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.688332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.101 [2024-07-12 15:00:26.692353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.101 [2024-07-12 15:00:26.692416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.101 [2024-07-12 15:00:26.692431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.102 [2024-07-12 15:00:26.696377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.102 [2024-07-12 15:00:26.696419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.102 [2024-07-12 15:00:26.696433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.102 [2024-07-12 15:00:26.700738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13d3f30) 00:20:48.102 [2024-07-12 15:00:26.700780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.102 [2024-07-12 15:00:26.700795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.102 [2024-07-12 15:00:26.704734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.102 [2024-07-12 15:00:26.704774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.102 [2024-07-12 15:00:26.704788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.102 [2024-07-12 15:00:26.708458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.102 [2024-07-12 15:00:26.708509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.102 [2024-07-12 15:00:26.708538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.102 [2024-07-12 15:00:26.712709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.102 [2024-07-12 15:00:26.712770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.102 [2024-07-12 15:00:26.712785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.102 [2024-07-12 15:00:26.716677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.102 [2024-07-12 15:00:26.716716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.102 [2024-07-12 15:00:26.716730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.102 [2024-07-12 15:00:26.720908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.102 [2024-07-12 15:00:26.720952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.102 [2024-07-12 15:00:26.720967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.102 [2024-07-12 15:00:26.724751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.102 [2024-07-12 15:00:26.724791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.102 [2024-07-12 15:00:26.724804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.102 [2024-07-12 15:00:26.728591] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.102 [2024-07-12 15:00:26.728631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.102 [2024-07-12 15:00:26.728656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.102 [2024-07-12 15:00:26.732272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.102 [2024-07-12 15:00:26.732312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.102 [2024-07-12 15:00:26.732326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.102 [2024-07-12 15:00:26.736780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.102 [2024-07-12 15:00:26.736820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.102 [2024-07-12 15:00:26.736834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.102 [2024-07-12 15:00:26.740596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.102 [2024-07-12 15:00:26.740637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.102 [2024-07-12 15:00:26.740651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.102 [2024-07-12 15:00:26.744309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.102 [2024-07-12 15:00:26.744350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.102 [2024-07-12 15:00:26.744364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.102 [2024-07-12 15:00:26.748724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.102 [2024-07-12 15:00:26.748765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.102 [2024-07-12 15:00:26.748779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.102 [2024-07-12 15:00:26.751994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.102 [2024-07-12 15:00:26.752048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.102 [2024-07-12 15:00:26.752062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:20:48.362 [2024-07-12 15:00:26.756262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.362 [2024-07-12 15:00:26.756304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.362 [2024-07-12 15:00:26.756317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.362 [2024-07-12 15:00:26.760643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.362 [2024-07-12 15:00:26.760684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.362 [2024-07-12 15:00:26.760698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.763944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.763983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.763997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.768658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.768698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.768712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.773245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.773286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.773300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.776772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.776811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.776825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.781036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.781075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.781090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.784906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.784946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.784960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.788047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.788087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.788101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.792162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.792202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.792216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.796663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.796702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.796716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.800875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.800915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.800929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.804349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.804388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.804402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.807551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.807587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.807601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.811218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.811258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.811272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.815815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.815855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.815869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.818922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.818961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.818975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.823078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.823119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.823133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.827285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.827329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.827343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.831231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.831272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.831286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.834570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.834610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.834624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.838773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.838814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.838828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.842442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.842483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.842496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.846721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.846763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.846777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.851181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.851224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.851238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.855223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.855265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.855279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.859689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.859732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.859746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.863789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.863831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 
[2024-07-12 15:00:26.863845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.867655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.867696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.867710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.871819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.871860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.871874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.875362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.363 [2024-07-12 15:00:26.875403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.363 [2024-07-12 15:00:26.875417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.363 [2024-07-12 15:00:26.879161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.879201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.879215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.883321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.883362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.883376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.886961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.887003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.887017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.891269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.891311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7744 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.891325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.895823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.895870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.895886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.898872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.898917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.898931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.903092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.903138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.903154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.908102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.908148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.908163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.911623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.911664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.911679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.915665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.915707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.915721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.919625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.919670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.919684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.924299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.924340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.924354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.928469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.928526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.928542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.932084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.932128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.932142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.936802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.936846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.936860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.940638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.940679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.940694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.944687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.944729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.944743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.948768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.948809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.948823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.953021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.953061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.953075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.956290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.956329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.956343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.960638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.960679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.960693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.965378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.965420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.965434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.968659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.968698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.968712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.973261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.973303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.973317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.977343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 
[2024-07-12 15:00:26.977385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.977400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.981240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.981281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.981295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.985020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.985063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.985077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.989835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.364 [2024-07-12 15:00:26.989889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.364 [2024-07-12 15:00:26.989903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.364 [2024-07-12 15:00:26.993494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.365 [2024-07-12 15:00:26.993550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.365 [2024-07-12 15:00:26.993566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.365 [2024-07-12 15:00:26.997566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.365 [2024-07-12 15:00:26.997608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.365 [2024-07-12 15:00:26.997622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.365 [2024-07-12 15:00:27.001880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.365 [2024-07-12 15:00:27.001927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.365 [2024-07-12 15:00:27.001942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.365 [2024-07-12 15:00:27.005539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x13d3f30) 00:20:48.365 [2024-07-12 15:00:27.005590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.365 [2024-07-12 15:00:27.005604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.365 [2024-07-12 15:00:27.010223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.365 [2024-07-12 15:00:27.010266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.365 [2024-07-12 15:00:27.010280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.365 [2024-07-12 15:00:27.013599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.365 [2024-07-12 15:00:27.013639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.365 [2024-07-12 15:00:27.013653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.017865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.017906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.017920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.021722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.021763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.021777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.025497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.025549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.025563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.029883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.029923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.029937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.033360] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.033399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.033413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.037345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.037386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.037401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.041534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.041575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.041590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.045357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.045398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.045412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.049232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.049272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.049286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.053410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.053451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.053465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.057800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.057844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.057858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:20:48.626 [2024-07-12 15:00:27.061370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.061417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.061431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.066027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.066070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.066084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.069270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.069311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.069325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.073572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.073614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.073628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.078049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.078090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.078104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.082877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.082918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.082932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.087086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.087131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.087145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.090476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.090538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.090554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.095414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.095484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.095499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.099677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.099721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.099735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.103508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.103562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.626 [2024-07-12 15:00:27.103577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.626 [2024-07-12 15:00:27.107677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.626 [2024-07-12 15:00:27.107731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.107746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.112292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.112358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.112374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.116825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.116866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.116880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.119698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.119737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.119751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.124330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.124388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.124403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.128349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.128403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.128419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.131785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.131834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.131848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.136112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.136168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.136183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.140568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.140622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.140636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.144770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.144815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.144830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.147891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.147930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.147944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.152060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.152116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.152131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.157043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.157103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.157118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.160788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.160828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.160842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.165284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.165327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.165341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.168839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.168885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.168899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.172460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.172502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 
[2024-07-12 15:00:27.172529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.176344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.176395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.176410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.180166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.180205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.180219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.184571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.184617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.184631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.188363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.188434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.188450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.192111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.192163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.192178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.196417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.196467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.196482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.200339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.200393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18400 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.200408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.204361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.204415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.204429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.208327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.208376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.208391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.212610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.212651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.212664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.216253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.216306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.216321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.220784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.220839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.220854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.627 [2024-07-12 15:00:27.225404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.627 [2024-07-12 15:00:27.225457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.627 [2024-07-12 15:00:27.225472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.628 [2024-07-12 15:00:27.229623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.628 [2024-07-12 15:00:27.229676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.628 [2024-07-12 15:00:27.229691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.628 [2024-07-12 15:00:27.233831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.628 [2024-07-12 15:00:27.233880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.628 [2024-07-12 15:00:27.233894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.628 [2024-07-12 15:00:27.237860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.628 [2024-07-12 15:00:27.237900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.628 [2024-07-12 15:00:27.237914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.628 [2024-07-12 15:00:27.242096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.628 [2024-07-12 15:00:27.242141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.628 [2024-07-12 15:00:27.242155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.628 [2024-07-12 15:00:27.245872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.628 [2024-07-12 15:00:27.245910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.628 [2024-07-12 15:00:27.245924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.628 [2024-07-12 15:00:27.249412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.628 [2024-07-12 15:00:27.249451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.628 [2024-07-12 15:00:27.249465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.628 [2024-07-12 15:00:27.253473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.628 [2024-07-12 15:00:27.253528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.628 [2024-07-12 15:00:27.253543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.628 [2024-07-12 15:00:27.257926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.628 [2024-07-12 15:00:27.257967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.628 [2024-07-12 15:00:27.257981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.628 [2024-07-12 15:00:27.261450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.628 [2024-07-12 15:00:27.261491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.628 [2024-07-12 15:00:27.261505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.628 [2024-07-12 15:00:27.265959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.628 [2024-07-12 15:00:27.266001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.628 [2024-07-12 15:00:27.266014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.628 [2024-07-12 15:00:27.270320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.628 [2024-07-12 15:00:27.270361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.628 [2024-07-12 15:00:27.270375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.628 [2024-07-12 15:00:27.274422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.628 [2024-07-12 15:00:27.274461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.628 [2024-07-12 15:00:27.274476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.277922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.277963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.277983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.282510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.282578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.282593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.286096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 
[2024-07-12 15:00:27.286143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.286157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.290770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.290810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.290824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.294540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.294577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.294590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.298310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.298350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.298364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.302480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.302528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.302544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.306254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.306294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.306308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.310302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.310343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.310357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.314832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.314871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.314885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.318228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.318267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.318281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.322053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.322093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.322107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.325542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.325581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.325595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.329963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.330005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.330019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.333696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.333742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.333756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.337110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.337170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.337185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.341554] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.341596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.341610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.345960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.346000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.346014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.349805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.349855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.349870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.353338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.353390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.353404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.357985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.358044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.358060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.361574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.361613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.361627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.365385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.365425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.365439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:20:48.888 [2024-07-12 15:00:27.369497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.369547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.369562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.373459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.888 [2024-07-12 15:00:27.373499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.888 [2024-07-12 15:00:27.373526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.888 [2024-07-12 15:00:27.376879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.376916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.376930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.381273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.381314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.381328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.385128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.385169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.385184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.389852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.389912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.389927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.393308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.393354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.393370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.397475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.397542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.397558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.401416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.401472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.401488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.405144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.405195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.405209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.409401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.409455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.409470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.413448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.413500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.413527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.417182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.417228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.417242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.421710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.421767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.421782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.425951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.426006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.426021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.429945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.429995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.430009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.434060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.434112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.434127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.438310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.438360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.438375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.442253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.442306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.442321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.446548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.446603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.446619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.450437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.450486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.450501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.454976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.455030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.455045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.459055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.459107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.459121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.463462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.463536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.463553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.467408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.467466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.467481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.471578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.471633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.471648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.475912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.475967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.475982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.480180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.480231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 
[2024-07-12 15:00:27.480256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.483896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.483942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.483957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.488006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.488061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.488075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.492140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.889 [2024-07-12 15:00:27.492200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.889 [2024-07-12 15:00:27.492216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.889 [2024-07-12 15:00:27.496168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.890 [2024-07-12 15:00:27.496219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.890 [2024-07-12 15:00:27.496233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.890 [2024-07-12 15:00:27.500672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.890 [2024-07-12 15:00:27.500730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.890 [2024-07-12 15:00:27.500745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.890 [2024-07-12 15:00:27.504602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.890 [2024-07-12 15:00:27.504657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.890 [2024-07-12 15:00:27.504671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.890 [2024-07-12 15:00:27.508638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.890 [2024-07-12 15:00:27.508690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:48.890 [2024-07-12 15:00:27.508705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.890 [2024-07-12 15:00:27.512883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.890 [2024-07-12 15:00:27.512939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.890 [2024-07-12 15:00:27.512954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.890 [2024-07-12 15:00:27.516582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.890 [2024-07-12 15:00:27.516627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.890 [2024-07-12 15:00:27.516641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.890 [2024-07-12 15:00:27.520545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.890 [2024-07-12 15:00:27.520583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.890 [2024-07-12 15:00:27.520598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.890 [2024-07-12 15:00:27.524507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.890 [2024-07-12 15:00:27.524565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.890 [2024-07-12 15:00:27.524579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.890 [2024-07-12 15:00:27.528826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.890 [2024-07-12 15:00:27.528865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.890 [2024-07-12 15:00:27.528879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.890 [2024-07-12 15:00:27.533707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.890 [2024-07-12 15:00:27.533747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.890 [2024-07-12 15:00:27.533761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.890 [2024-07-12 15:00:27.536935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.890 [2024-07-12 15:00:27.536976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.890 [2024-07-12 15:00:27.536990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.890 [2024-07-12 15:00:27.540629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:48.890 [2024-07-12 15:00:27.540671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.540685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.544043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.544083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.544097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.548335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.548375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.548389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.553122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.553162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.553176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.557635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.557677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.557692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.560827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.560865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.560879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.565042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.565081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.565096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.569705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.569745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.569759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.573270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.573310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.573325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.577583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.577622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.577636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.581452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.581492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.581506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.585910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.585949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.585963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.590145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.590185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.590199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.593810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 
[2024-07-12 15:00:27.593855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.593869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.598262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.598302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.598316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.602009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.602049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.602063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.605888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.605928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.605942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.609641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.609680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.609694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.614037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.614079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.614093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.617868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.617916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.617931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.621526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.621565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.621579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.626497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.626549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.626564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.630989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.631029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.631043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.634828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.634868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.154 [2024-07-12 15:00:27.634882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.154 [2024-07-12 15:00:27.638720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.154 [2024-07-12 15:00:27.638760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.638774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.643094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.643135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.643150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.648107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.648151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.648166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.651434] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.651476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.651491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.656272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.656314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.656328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.660915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.660956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.660971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.664197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.664247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.664262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.668366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.668409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.668424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.673110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.673150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.673165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.676614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.676653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.676667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:20:49.155 [2024-07-12 15:00:27.681159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.681203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.681218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.685666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.685723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.685740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.689490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.689563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.689580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.694171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.694213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.694227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.698097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.698138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.698151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.702355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.702396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.702410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.706251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.706292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.706306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.710204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.710245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.710260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.714163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.714205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.714218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.718651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.718692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.718706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.721996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.722037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.722051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.726250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.726294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.726308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.729636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.729676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.729690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.732879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.732919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.732933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.736862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.736902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.736916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.740355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.740395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.740409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.744670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.744711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.744724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.748058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.748098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.748112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.751986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.155 [2024-07-12 15:00:27.752027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.155 [2024-07-12 15:00:27.752041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.155 [2024-07-12 15:00:27.756061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.156 [2024-07-12 15:00:27.756101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.156 [2024-07-12 15:00:27.756115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.156 [2024-07-12 15:00:27.759392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.156 [2024-07-12 15:00:27.759432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.156 [2024-07-12 15:00:27.759446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.156 [2024-07-12 15:00:27.763174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.156 [2024-07-12 15:00:27.763214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.156 [2024-07-12 15:00:27.763228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.156 [2024-07-12 15:00:27.766903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.156 [2024-07-12 15:00:27.766945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.156 [2024-07-12 15:00:27.766960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.156 [2024-07-12 15:00:27.770954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.156 [2024-07-12 15:00:27.770995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.156 [2024-07-12 15:00:27.771009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.156 [2024-07-12 15:00:27.774803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.156 [2024-07-12 15:00:27.774843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.156 [2024-07-12 15:00:27.774858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.156 [2024-07-12 15:00:27.779065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.156 [2024-07-12 15:00:27.779105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.156 [2024-07-12 15:00:27.779119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.156 [2024-07-12 15:00:27.782838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.156 [2024-07-12 15:00:27.782878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.156 [2024-07-12 15:00:27.782892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.156 [2024-07-12 15:00:27.786258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.156 [2024-07-12 15:00:27.786298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.156 
[2024-07-12 15:00:27.786312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.156 [2024-07-12 15:00:27.789957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.156 [2024-07-12 15:00:27.789997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.156 [2024-07-12 15:00:27.790011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.156 [2024-07-12 15:00:27.794080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.156 [2024-07-12 15:00:27.794120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.156 [2024-07-12 15:00:27.794134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.156 [2024-07-12 15:00:27.797995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.156 [2024-07-12 15:00:27.798038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.156 [2024-07-12 15:00:27.798053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.156 [2024-07-12 15:00:27.802832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.156 [2024-07-12 15:00:27.802892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.156 [2024-07-12 15:00:27.802917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.808977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.809041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.436 [2024-07-12 15:00:27.809062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.813882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.813939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.436 [2024-07-12 15:00:27.813960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.819645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.819689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.436 [2024-07-12 15:00:27.819703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.824667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.824709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.436 [2024-07-12 15:00:27.824724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.830288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.830347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.436 [2024-07-12 15:00:27.830368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.834458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.834528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.436 [2024-07-12 15:00:27.834552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.840783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.840838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.436 [2024-07-12 15:00:27.840853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.845290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.845329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.436 [2024-07-12 15:00:27.845343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.850276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.850317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.436 [2024-07-12 15:00:27.850332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.854399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.854439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.436 [2024-07-12 15:00:27.854453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.857830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.857872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.436 [2024-07-12 15:00:27.857887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.862688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.862728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.436 [2024-07-12 15:00:27.862742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.867800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.867841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.436 [2024-07-12 15:00:27.867855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.871926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.871981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.436 [2024-07-12 15:00:27.872005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.877893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.877949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.436 [2024-07-12 15:00:27.877972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.436 [2024-07-12 15:00:27.883301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.436 [2024-07-12 15:00:27.883344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.883359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.889402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.889457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.889474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.895969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.896020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.896042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.901299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.901341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.901356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.904751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.904790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.904805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.909011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.909055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.909069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.912588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.912628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.912643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.917038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.917081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.917096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.920884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 
[2024-07-12 15:00:27.920924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.920938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.924568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.924610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.924624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.928487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.928540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.928555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.932528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.932563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.932577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.936435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.936476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.936490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.940120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.940160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.940174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.944790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.944832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.944846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.949294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.949342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.949357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.953225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.953267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.953282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.957304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.957344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.957359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.961714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.961755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.961769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.965051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.965091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.965105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.969792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.969833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.969847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.973096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.973136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.973156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.977492] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.977545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.977559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.981412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.981452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.981466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.985270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.985310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.985324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.989635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.989676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.989690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.992994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.993032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.993046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:27.997584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:27.997618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:27.997632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.437 [2024-07-12 15:00:28.002655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.437 [2024-07-12 15:00:28.002695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.437 [2024-07-12 15:00:28.002709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:20:49.437 [2024-07-12 15:00:28.007301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.007341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.007355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.011322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.011361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.011375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.014325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.014368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.014382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.018133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.018174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.018187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.021861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.021903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.021916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.026165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.026205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.026219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.030819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.030859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.030874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.035309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.035348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.035361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.038390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.038429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.038443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.042472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.042527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.042543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.046959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.046999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.047013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.049900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.049940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.049954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.054763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.054804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.054818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.059272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.059313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.059327] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.062052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.062091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.062105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.066374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.066415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.066428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.071098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.071139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.071154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.075557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.075599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.075613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.078350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.078393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.078407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.083285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.083326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 15:00:28.083340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.438 [2024-07-12 15:00:28.086625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.438 [2024-07-12 15:00:28.086669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.438 [2024-07-12 
15:00:28.086684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.698 [2024-07-12 15:00:28.091003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.698 [2024-07-12 15:00:28.091057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.698 [2024-07-12 15:00:28.091072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.698 [2024-07-12 15:00:28.096304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.698 [2024-07-12 15:00:28.096347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.698 [2024-07-12 15:00:28.096361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.698 [2024-07-12 15:00:28.099533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.698 [2024-07-12 15:00:28.099571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.698 [2024-07-12 15:00:28.099597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.698 [2024-07-12 15:00:28.103663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.698 [2024-07-12 15:00:28.103703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.698 [2024-07-12 15:00:28.103717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.698 [2024-07-12 15:00:28.108413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.698 [2024-07-12 15:00:28.108457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.698 [2024-07-12 15:00:28.108472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.698 [2024-07-12 15:00:28.112373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.698 [2024-07-12 15:00:28.112415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.698 [2024-07-12 15:00:28.112428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.698 [2024-07-12 15:00:28.116042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.698 [2024-07-12 15:00:28.116083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9664 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:49.698 [2024-07-12 15:00:28.116097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.698 [2024-07-12 15:00:28.119916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.698 [2024-07-12 15:00:28.119956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.698 [2024-07-12 15:00:28.119970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.698 [2024-07-12 15:00:28.123565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.698 [2024-07-12 15:00:28.123605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.698 [2024-07-12 15:00:28.123619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.698 [2024-07-12 15:00:28.127774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.698 [2024-07-12 15:00:28.127815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.698 [2024-07-12 15:00:28.127829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.698 [2024-07-12 15:00:28.131836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.698 [2024-07-12 15:00:28.131876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.698 [2024-07-12 15:00:28.131890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.698 [2024-07-12 15:00:28.135374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.698 [2024-07-12 15:00:28.135415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.698 [2024-07-12 15:00:28.135428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.139798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.139840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.139853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.143817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.143858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.143872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.147133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.147174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.147188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.151312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.151352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.151367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.155804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.155860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.155877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.159535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.159577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.159592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.164101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.164143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.164157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.169072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.169129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.169146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.172965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.173023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.173041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.177489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.177551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.177565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.182471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.182530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.182546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.186041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.186082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.186096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.189228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.189270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.189284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.193119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.193160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.193174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.197722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.197763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.197778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.201353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 
00:20:49.699 [2024-07-12 15:00:28.201393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.201407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.205598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.205635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.205650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.210665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.210707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.210722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.214636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.214679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.214693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.218464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.218509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.218546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.222817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.222858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.222872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.226158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.226198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.226212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.230255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.230296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.230310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.234209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.234250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.699 [2024-07-12 15:00:28.234264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.699 [2024-07-12 15:00:28.237833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.699 [2024-07-12 15:00:28.237873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.237886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.242179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.242220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.242235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.245812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.245850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.245864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.249342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.249379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.249393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.253620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.253658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.253672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.257195] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.257234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.257248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.261321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.261362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.261376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.265139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.265179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.265193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.269112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.269152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.269166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.273309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.273350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.273364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.277203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.277243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.277257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.281016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.281058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.281072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:20:49.700 [2024-07-12 15:00:28.284187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.284226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.284248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.288358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.288398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.288412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.292291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.292331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.292345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.296704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.296745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.296759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.300198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.300245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.300260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.305164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.305206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.305220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.309635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.309674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.309688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.312744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.312784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.312798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.317051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.317089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.317103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.320894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.700 [2024-07-12 15:00:28.320936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.700 [2024-07-12 15:00:28.320949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.700 [2024-07-12 15:00:28.324765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.701 [2024-07-12 15:00:28.324805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.701 [2024-07-12 15:00:28.324819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.701 [2024-07-12 15:00:28.328957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.701 [2024-07-12 15:00:28.329008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.701 [2024-07-12 15:00:28.329021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.701 [2024-07-12 15:00:28.332577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.701 [2024-07-12 15:00:28.332616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.701 [2024-07-12 15:00:28.332629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.701 [2024-07-12 15:00:28.337267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.701 [2024-07-12 15:00:28.337308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.701 [2024-07-12 15:00:28.337322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.701 [2024-07-12 15:00:28.341029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.701 [2024-07-12 15:00:28.341071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.701 [2024-07-12 15:00:28.341086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.701 [2024-07-12 15:00:28.345259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.701 [2024-07-12 15:00:28.345301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.701 [2024-07-12 15:00:28.345315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.701 [2024-07-12 15:00:28.349461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.701 [2024-07-12 15:00:28.349505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.701 [2024-07-12 15:00:28.349533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.959 [2024-07-12 15:00:28.353486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.959 [2024-07-12 15:00:28.353538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.959 [2024-07-12 15:00:28.353554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.959 [2024-07-12 15:00:28.357947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.959 [2024-07-12 15:00:28.357990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.959 [2024-07-12 15:00:28.358005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.959 [2024-07-12 15:00:28.361641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.959 [2024-07-12 15:00:28.361681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.959 [2024-07-12 15:00:28.361695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.959 [2024-07-12 15:00:28.365678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.959 [2024-07-12 15:00:28.365719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:49.959 [2024-07-12 15:00:28.365733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.959 [2024-07-12 15:00:28.369613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.959 [2024-07-12 15:00:28.369654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.959 [2024-07-12 15:00:28.369668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.959 [2024-07-12 15:00:28.373903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.959 [2024-07-12 15:00:28.373944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.959 [2024-07-12 15:00:28.373959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.959 [2024-07-12 15:00:28.377900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.959 [2024-07-12 15:00:28.377940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.959 [2024-07-12 15:00:28.377954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.959 [2024-07-12 15:00:28.381675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.959 [2024-07-12 15:00:28.381716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.959 [2024-07-12 15:00:28.381730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.959 [2024-07-12 15:00:28.386060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.959 [2024-07-12 15:00:28.386100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.959 [2024-07-12 15:00:28.386115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.959 [2024-07-12 15:00:28.390000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.959 [2024-07-12 15:00:28.390182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.959 [2024-07-12 15:00:28.390321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.959 [2024-07-12 15:00:28.394634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.959 [2024-07-12 15:00:28.394810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9760 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.959 [2024-07-12 15:00:28.394938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.398860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.399038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.399177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.403604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.403757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.403776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.406470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.406511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.406537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.411275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.411316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.411331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.415534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.415574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.415588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.418701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.418741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.418756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.422127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.422167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.422182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.425946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.425988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.426003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.431034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.431077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.431093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.435760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.435800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.435815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.439804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.439845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.439860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.443163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.443204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.443218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.447483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.447532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.447548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.451172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 
00:20:49.960 [2024-07-12 15:00:28.451213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.451228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.455194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.455373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.455395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.459314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.459491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.459638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.960 [2024-07-12 15:00:28.462865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d3f30) 00:20:49.960 [2024-07-12 15:00:28.463049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.960 [2024-07-12 15:00:28.463183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.960 00:20:49.960 Latency(us) 00:20:49.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.960 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:49.960 nvme0n1 : 2.00 7540.68 942.58 0.00 0.00 2117.07 700.04 8579.26 00:20:49.960 =================================================================================================================== 00:20:49.960 Total : 7540.68 942.58 0.00 0.00 2117.07 700.04 8579.26 00:20:49.960 0 00:20:49.960 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:49.960 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:49.960 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:49.960 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:49.960 | .driver_specific 00:20:49.960 | .nvme_error 00:20:49.960 | .status_code 00:20:49.960 | .command_transient_transport_error' 00:20:50.219 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 487 > 0 )) 00:20:50.219 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93637 00:20:50.219 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93637 ']' 00:20:50.219 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93637 00:20:50.219 15:00:28 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:50.219 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.219 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93637 00:20:50.219 killing process with pid 93637 00:20:50.219 Received shutdown signal, test time was about 2.000000 seconds 00:20:50.219 00:20:50.219 Latency(us) 00:20:50.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.219 =================================================================================================================== 00:20:50.219 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.219 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:50.219 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:50.219 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93637' 00:20:50.219 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93637 00:20:50.219 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93637 00:20:50.477 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:50.477 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:50.477 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:50.477 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:50.477 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:50.477 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93714 00:20:50.477 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93714 /var/tmp/bperf.sock 00:20:50.477 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:50.477 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93714 ']' 00:20:50.477 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:50.477 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.478 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:50.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:50.478 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.478 15:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:50.478 [2024-07-12 15:00:28.978744] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:20:50.478 [2024-07-12 15:00:28.979092] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93714 ] 00:20:50.478 [2024-07-12 15:00:29.118427] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.735 [2024-07-12 15:00:29.211339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.670 15:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.670 15:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:51.670 15:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:51.670 15:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:51.670 15:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:51.670 15:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.670 15:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:51.670 15:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.670 15:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:51.670 15:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:51.929 nvme0n1 00:20:51.929 15:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:51.929 15:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.929 15:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:52.187 15:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.187 15:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:52.187 15:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:52.187 Running I/O for 2 seconds... 
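(Annotation, not part of the captured log.) The xtrace lines above spell out the setup for this randwrite error-injection case, and the trace further up shows how the result is later checked (bdev_get_iostat piped through jq, then the (( count > 0 )) assertion in host/digest.sh). A minimal sketch of that sequence, reconstructed only from the commands visible in this trace — rpc_cmd is the autotest wrapper around scripts/rpc.py, and the target-side socket it uses is not shown here, so that part is assumed — would be:

  # start bdevperf on its own RPC socket: randwrite, 4 KiB I/O, queue depth 128, 2 s runtime, -z = wait for RPC start
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # keep per-error NVMe statistics and retry failed commands indefinitely
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # make sure crc32c corruption starts disabled, then attach the target over TCP with data digest (--ddgst) enabled
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt crc32c results (-i 256, as traced) so the host detects data digest errors, then drive the I/O
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # afterwards the test reads the transient transport error count from iostat and requires it to be non-zero
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that follow are the expected effect of the injected digest corruption; the retried commands eventually succeed, which is why the run still reports normal IOPS in the Latency summary.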
00:20:52.187 [2024-07-12 15:00:30.731040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f6458 00:20:52.187 [2024-07-12 15:00:30.732158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.187 [2024-07-12 15:00:30.732205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:52.187 [2024-07-12 15:00:30.743327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f4f40 00:20:52.187 [2024-07-12 15:00:30.744448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.187 [2024-07-12 15:00:30.744487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:52.187 [2024-07-12 15:00:30.757445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e5a90 00:20:52.187 [2024-07-12 15:00:30.759185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.187 [2024-07-12 15:00:30.759220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:52.187 [2024-07-12 15:00:30.765987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190ed920 00:20:52.187 [2024-07-12 15:00:30.766785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.187 [2024-07-12 15:00:30.766820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:52.187 [2024-07-12 15:00:30.778152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e8088 00:20:52.187 [2024-07-12 15:00:30.778946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.187 [2024-07-12 15:00:30.778982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:52.187 [2024-07-12 15:00:30.792226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f8e88 00:20:52.187 [2024-07-12 15:00:30.793511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.187 [2024-07-12 15:00:30.793555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:52.187 [2024-07-12 15:00:30.803608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e84c0 00:20:52.187 [2024-07-12 15:00:30.804707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.187 [2024-07-12 15:00:30.804743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:20:52.187 [2024-07-12 15:00:30.817905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e4140 00:20:52.187 [2024-07-12 15:00:30.819852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.187 [2024-07-12 15:00:30.819889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:52.187 [2024-07-12 15:00:30.826376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190feb58 00:20:52.187 [2024-07-12 15:00:30.827197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.187 [2024-07-12 15:00:30.827231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:52.187 [2024-07-12 15:00:30.840135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f5be8 00:20:52.445 [2024-07-12 15:00:30.841156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:30.841192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:30.851525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f8e88 00:20:52.445 [2024-07-12 15:00:30.852340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:30.852378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:30.862937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190eff18 00:20:52.445 [2024-07-12 15:00:30.863633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:30.863668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:30.877745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f20d8 00:20:52.445 [2024-07-12 15:00:30.879696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:30.879731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:30.886217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e4578 00:20:52.445 [2024-07-12 15:00:30.887205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:30.887240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:30.898410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fd208 00:20:52.445 [2024-07-12 15:00:30.899392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:30.899428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:30.909823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e6fa8 00:20:52.445 [2024-07-12 15:00:30.910671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:30.910706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:30.923932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f7100 00:20:52.445 [2024-07-12 15:00:30.924991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:30.925027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:30.934739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190df118 00:20:52.445 [2024-07-12 15:00:30.935913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:30.935947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:30.949138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f7100 00:20:52.445 [2024-07-12 15:00:30.950974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:30.951014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:30.957604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f6890 00:20:52.445 [2024-07-12 15:00:30.958470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:30.958504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:30.969781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fef90 00:20:52.445 [2024-07-12 15:00:30.970657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:30.970692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:30.983305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e73e0 00:20:52.445 [2024-07-12 15:00:30.984683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:30.984718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:30.994468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f7538 00:20:52.445 [2024-07-12 15:00:30.995509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:30.995556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:31.006243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e7c50 00:20:52.445 [2024-07-12 15:00:31.007158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:31.007193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:31.017639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190eb328 00:20:52.445 [2024-07-12 15:00:31.018372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:31.018406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:31.033134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fd208 00:20:52.445 [2024-07-12 15:00:31.035035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.445 [2024-07-12 15:00:31.035069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:52.445 [2024-07-12 15:00:31.041637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190eff18 00:20:52.446 [2024-07-12 15:00:31.042557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.446 [2024-07-12 15:00:31.042586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:52.446 [2024-07-12 15:00:31.056083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f8a50 00:20:52.446 [2024-07-12 15:00:31.057539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.446 [2024-07-12 15:00:31.057577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:52.446 [2024-07-12 15:00:31.067157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f9f68 00:20:52.446 [2024-07-12 15:00:31.068427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.446 [2024-07-12 15:00:31.068460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:52.446 [2024-07-12 15:00:31.078605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190de038 00:20:52.446 [2024-07-12 15:00:31.079744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.446 [2024-07-12 15:00:31.079775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:52.446 [2024-07-12 15:00:31.090056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f9f68 00:20:52.446 [2024-07-12 15:00:31.091004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.446 [2024-07-12 15:00:31.091035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:52.704 [2024-07-12 15:00:31.101840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fe2e8 00:20:52.704 [2024-07-12 15:00:31.102695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.704 [2024-07-12 15:00:31.102737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:52.704 [2024-07-12 15:00:31.116099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e0630 00:20:52.704 [2024-07-12 15:00:31.117396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.704 [2024-07-12 15:00:31.117436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:52.704 [2024-07-12 15:00:31.129727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190eb760 00:20:52.704 [2024-07-12 15:00:31.131631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.704 [2024-07-12 15:00:31.131665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:52.704 [2024-07-12 15:00:31.138213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fac10 00:20:52.704 [2024-07-12 15:00:31.138991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.704 [2024-07-12 
15:00:31.139023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:52.704 [2024-07-12 15:00:31.152506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f96f8 00:20:52.704 [2024-07-12 15:00:31.153912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.704 [2024-07-12 15:00:31.153945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:52.704 [2024-07-12 15:00:31.163809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fd640 00:20:52.704 [2024-07-12 15:00:31.165224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.704 [2024-07-12 15:00:31.165256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:52.704 [2024-07-12 15:00:31.176389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f3a28 00:20:52.704 [2024-07-12 15:00:31.177943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.704 [2024-07-12 15:00:31.177973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:52.704 [2024-07-12 15:00:31.189000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f0788 00:20:52.704 [2024-07-12 15:00:31.190744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.704 [2024-07-12 15:00:31.190776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:52.704 [2024-07-12 15:00:31.201581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f1ca0 00:20:52.704 [2024-07-12 15:00:31.203471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.704 [2024-07-12 15:00:31.203506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:52.704 [2024-07-12 15:00:31.210174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f8618 00:20:52.704 [2024-07-12 15:00:31.210969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.704 [2024-07-12 15:00:31.211007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:52.704 [2024-07-12 15:00:31.223594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fdeb0 00:20:52.704 [2024-07-12 15:00:31.224870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:52.704 [2024-07-12 15:00:31.224908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:52.704 [2024-07-12 15:00:31.233500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e1f80 00:20:52.704 [2024-07-12 15:00:31.234223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.704 [2024-07-12 15:00:31.234254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:52.704 [2024-07-12 15:00:31.247876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f6890 00:20:52.704 [2024-07-12 15:00:31.249282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.704 [2024-07-12 15:00:31.249316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:52.704 [2024-07-12 15:00:31.259860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fdeb0 00:20:52.704 [2024-07-12 15:00:31.260783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.704 [2024-07-12 15:00:31.260818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:52.705 [2024-07-12 15:00:31.271542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f1ca0 00:20:52.705 [2024-07-12 15:00:31.272787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.705 [2024-07-12 15:00:31.272823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:52.705 [2024-07-12 15:00:31.283215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fbcf0 00:20:52.705 [2024-07-12 15:00:31.284485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.705 [2024-07-12 15:00:31.284529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:52.705 [2024-07-12 15:00:31.295168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fef90 00:20:52.705 [2024-07-12 15:00:31.295948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.705 [2024-07-12 15:00:31.295983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:52.705 [2024-07-12 15:00:31.307734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f2948 00:20:52.705 [2024-07-12 15:00:31.308706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3063 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:52.705 [2024-07-12 15:00:31.308740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:52.705 [2024-07-12 15:00:31.319709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f81e0 00:20:52.705 [2024-07-12 15:00:31.320554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.705 [2024-07-12 15:00:31.320589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:52.705 [2024-07-12 15:00:31.331067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e6738 00:20:52.705 [2024-07-12 15:00:31.331707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.705 [2024-07-12 15:00:31.331742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:52.705 [2024-07-12 15:00:31.344802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fd208 00:20:52.705 [2024-07-12 15:00:31.346259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.705 [2024-07-12 15:00:31.346305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:52.705 [2024-07-12 15:00:31.355372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fc128 00:20:52.705 [2024-07-12 15:00:31.357261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.705 [2024-07-12 15:00:31.357296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:52.963 [2024-07-12 15:00:31.368270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e4140 00:20:52.963 [2024-07-12 15:00:31.369286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.963 [2024-07-12 15:00:31.369321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:52.963 [2024-07-12 15:00:31.379129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e7818 00:20:52.963 [2024-07-12 15:00:31.380366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.963 [2024-07-12 15:00:31.380400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:52.963 [2024-07-12 15:00:31.390968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f6458 00:20:52.963 [2024-07-12 15:00:31.392096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17302 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.963 [2024-07-12 15:00:31.392129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:52.963 [2024-07-12 15:00:31.402936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e01f8 00:20:52.963 [2024-07-12 15:00:31.403588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.963 [2024-07-12 15:00:31.403639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:52.963 [2024-07-12 15:00:31.415425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f2d80 00:20:52.963 [2024-07-12 15:00:31.416255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.963 [2024-07-12 15:00:31.416291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:52.963 [2024-07-12 15:00:31.427372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e1b48 00:20:52.963 [2024-07-12 15:00:31.428550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.428583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:52.964 [2024-07-12 15:00:31.440964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fbcf0 00:20:52.964 [2024-07-12 15:00:31.442738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.442772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:52.964 [2024-07-12 15:00:31.449471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e1b48 00:20:52.964 [2024-07-12 15:00:31.450326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.450366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:52.964 [2024-07-12 15:00:31.464076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e95a0 00:20:52.964 [2024-07-12 15:00:31.465413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.465447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:52.964 [2024-07-12 15:00:31.477330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e9168 00:20:52.964 [2024-07-12 15:00:31.479130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:107 nsid:1 lba:21516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.479171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:52.964 [2024-07-12 15:00:31.488739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fef90 00:20:52.964 [2024-07-12 15:00:31.490360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.490394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:52.964 [2024-07-12 15:00:31.500051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f8a50 00:20:52.964 [2024-07-12 15:00:31.501535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.501569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:52.964 [2024-07-12 15:00:31.509642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190eaef0 00:20:52.964 [2024-07-12 15:00:31.510455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.510490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:52.964 [2024-07-12 15:00:31.521416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fb8b8 00:20:52.964 [2024-07-12 15:00:31.522242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.522277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:52.964 [2024-07-12 15:00:31.533769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190feb58 00:20:52.964 [2024-07-12 15:00:31.534596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.534632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:52.964 [2024-07-12 15:00:31.548100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f9f68 00:20:52.964 [2024-07-12 15:00:31.549155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.549194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:52.964 [2024-07-12 15:00:31.559224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f5378 00:20:52.964 [2024-07-12 15:00:31.561108] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.561143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:52.964 [2024-07-12 15:00:31.572630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f1430 00:20:52.964 [2024-07-12 15:00:31.574114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.574149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:52.964 [2024-07-12 15:00:31.583645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190de8a8 00:20:52.964 [2024-07-12 15:00:31.584970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.585004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:52.964 [2024-07-12 15:00:31.595360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f7da8 00:20:52.964 [2024-07-12 15:00:31.596697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.596731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:52.964 [2024-07-12 15:00:31.609736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f96f8 00:20:52.964 [2024-07-12 15:00:31.611747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.964 [2024-07-12 15:00:31.611781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.618313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190edd58 00:20:53.224 [2024-07-12 15:00:31.619333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.619366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.632826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f1868 00:20:53.224 [2024-07-12 15:00:31.634539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.634579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.645394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190eee38 00:20:53.224 [2024-07-12 15:00:31.647240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.647273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.653886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190eee38 00:20:53.224 [2024-07-12 15:00:31.654780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.654815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.666079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190eb760 00:20:53.224 [2024-07-12 15:00:31.666963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.667000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.677524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fc998 00:20:53.224 [2024-07-12 15:00:31.678247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.678281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.691633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f6020 00:20:53.224 [2024-07-12 15:00:31.692576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.692611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.703149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190eaef0 00:20:53.224 [2024-07-12 15:00:31.704364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.704400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.714883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fbcf0 00:20:53.224 [2024-07-12 15:00:31.716144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.716182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.729227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190ef6a8 00:20:53.224 [2024-07-12 
15:00:31.731141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.731177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.737748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e4578 00:20:53.224 [2024-07-12 15:00:31.738493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.738535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.752098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190de038 00:20:53.224 [2024-07-12 15:00:31.753548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.753588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.763194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e3060 00:20:53.224 [2024-07-12 15:00:31.764448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.764482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.773299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e84c0 00:20:53.224 [2024-07-12 15:00:31.774078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.774111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.787700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e5658 00:20:53.224 [2024-07-12 15:00:31.789111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.789146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.798806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f8618 00:20:53.224 [2024-07-12 15:00:31.799919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.799954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.810538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190eff18 
00:20:53.224 [2024-07-12 15:00:31.811469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.811503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.821839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190eaef0 00:20:53.224 [2024-07-12 15:00:31.822616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.822652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.836106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190eff18 00:20:53.224 [2024-07-12 15:00:31.837742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.837778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.848347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fbcf0 00:20:53.224 [2024-07-12 15:00:31.849972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.850005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.858170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f1868 00:20:53.224 [2024-07-12 15:00:31.858851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.858885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:53.224 [2024-07-12 15:00:31.870382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190feb58 00:20:53.224 [2024-07-12 15:00:31.871379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.224 [2024-07-12 15:00:31.871414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:31.881823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fe720 00:20:53.482 [2024-07-12 15:00:31.882635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:31.882672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:31.893729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) 
with pdu=0x2000190e0630 00:20:53.482 [2024-07-12 15:00:31.894695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:31.894728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:31.908087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fb480 00:20:53.482 [2024-07-12 15:00:31.909749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:31.909784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:31.919194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e6738 00:20:53.482 [2024-07-12 15:00:31.920546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:31.920581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:31.930958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e3060 00:20:53.482 [2024-07-12 15:00:31.932311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:31.932345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:31.945294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f2d80 00:20:53.482 [2024-07-12 15:00:31.947329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:31.947367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:31.953823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190edd58 00:20:53.482 [2024-07-12 15:00:31.954696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:31.954728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:31.967381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f3a28 00:20:53.482 [2024-07-12 15:00:31.968998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:31.969056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:31.978825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1fa7b90) with pdu=0x2000190ed0b0 00:20:53.482 [2024-07-12 15:00:31.980206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:31.980273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:31.990676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f3a28 00:20:53.482 [2024-07-12 15:00:31.991894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:31.991934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:32.005474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e99d8 00:20:53.482 [2024-07-12 15:00:32.007366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:32.007408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:32.014018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fda78 00:20:53.482 [2024-07-12 15:00:32.014947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:32.014988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:32.028458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190feb58 00:20:53.482 [2024-07-12 15:00:32.029908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:32.029947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:32.039871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fc128 00:20:53.482 [2024-07-12 15:00:32.041141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:32.041177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:32.051004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e0a68 00:20:53.482 [2024-07-12 15:00:32.052143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:32.052183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:32.062671] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190ed920 00:20:53.482 [2024-07-12 15:00:32.063629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:32.063665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:32.074000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e12d8 00:20:53.482 [2024-07-12 15:00:32.074794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:32.074831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:32.088253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190df988 00:20:53.482 [2024-07-12 15:00:32.089876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:32.089911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:32.099320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190feb58 00:20:53.482 [2024-07-12 15:00:32.100691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:32.100727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:32.110939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190de470 00:20:53.482 [2024-07-12 15:00:32.112279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:32.112316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:32.123391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e8088 00:20:53.482 [2024-07-12 15:00:32.124895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.482 [2024-07-12 15:00:32.124935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:53.482 [2024-07-12 15:00:32.134597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190eee38 00:20:53.739 [2024-07-12 15:00:32.135739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.135776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:53.740 
[2024-07-12 15:00:32.146308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f9b30 00:20:53.740 [2024-07-12 15:00:32.147357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.147394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.157424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f3e60 00:20:53.740 [2024-07-12 15:00:32.158312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.158352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.171552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190de470 00:20:53.740 [2024-07-12 15:00:32.172585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.172620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.182869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fb048 00:20:53.740 [2024-07-12 15:00:32.183781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.183817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.193555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e88f8 00:20:53.740 [2024-07-12 15:00:32.194602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.194638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.205641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190eff18 00:20:53.740 [2024-07-12 15:00:32.206680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.206716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.217007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fda78 00:20:53.740 [2024-07-12 15:00:32.217912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.217949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.228707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f0ff8 00:20:53.740 [2024-07-12 15:00:32.229596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.229631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.243063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190eaef0 00:20:53.740 [2024-07-12 15:00:32.244651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.244694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.254198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f81e0 00:20:53.740 [2024-07-12 15:00:32.255451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.255491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.265972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190ecc78 00:20:53.740 [2024-07-12 15:00:32.267079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.267114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.277362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fe2e8 00:20:53.740 [2024-07-12 15:00:32.278303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.278336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.291763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190de038 00:20:53.740 [2024-07-12 15:00:32.293549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.293585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.303985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e95a0 00:20:53.740 [2024-07-12 15:00:32.305766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.305802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 
cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.313641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e4de8 00:20:53.740 [2024-07-12 15:00:32.314836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.314870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.327363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fef90 00:20:53.740 [2024-07-12 15:00:32.329022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.329059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.340012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190de038 00:20:53.740 [2024-07-12 15:00:32.341801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.341837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.348547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e0630 00:20:53.740 [2024-07-12 15:00:32.349328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.349362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.362888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e95a0 00:20:53.740 [2024-07-12 15:00:32.364189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.364224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.374229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e8088 00:20:53.740 [2024-07-12 15:00:32.375379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.375417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:53.740 [2024-07-12 15:00:32.388894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e6b70 00:20:53.740 [2024-07-12 15:00:32.390897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.740 [2024-07-12 15:00:32.390937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.397734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f31b8 00:20:53.999 [2024-07-12 15:00:32.398774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.398810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.410057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f2510 00:20:53.999 [2024-07-12 15:00:32.411068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.411102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.424397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190ed4e8 00:20:53.999 [2024-07-12 15:00:32.426065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.426098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.435582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f5378 00:20:53.999 [2024-07-12 15:00:32.436922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.436954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.447314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fe720 00:20:53.999 [2024-07-12 15:00:32.448540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.448570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.458350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190ebfd0 00:20:53.999 [2024-07-12 15:00:32.459374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.459407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.469783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190ef6a8 00:20:53.999 [2024-07-12 15:00:32.470660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.470692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.481438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190ee190 00:20:53.999 [2024-07-12 15:00:32.482318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.482348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.493989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190ea248 00:20:53.999 [2024-07-12 15:00:32.495068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.495104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.506801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e49b0 00:20:53.999 [2024-07-12 15:00:32.508033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.508068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.519329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e4578 00:20:53.999 [2024-07-12 15:00:32.520721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.520752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.531466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f0bc0 00:20:53.999 [2024-07-12 15:00:32.532864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.532898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.545126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f96f8 00:20:53.999 [2024-07-12 15:00:32.547004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.547037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.553727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190ea248 00:20:53.999 [2024-07-12 15:00:32.554640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.554669] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.568100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e3060 00:20:53.999 [2024-07-12 15:00:32.569682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.569712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.579230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f4f40 00:20:53.999 [2024-07-12 15:00:32.580480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.999 [2024-07-12 15:00:32.580529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:53.999 [2024-07-12 15:00:32.591536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e5220 00:20:53.999 [2024-07-12 15:00:32.592850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.000 [2024-07-12 15:00:32.592881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:54.000 [2024-07-12 15:00:32.604024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e5a90 00:20:54.000 [2024-07-12 15:00:32.605464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.000 [2024-07-12 15:00:32.605495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:54.000 [2024-07-12 15:00:32.616225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fc560 00:20:54.000 [2024-07-12 15:00:32.617748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.000 [2024-07-12 15:00:32.617785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:54.000 [2024-07-12 15:00:32.627777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190fc128 00:20:54.000 [2024-07-12 15:00:32.629085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.000 [2024-07-12 15:00:32.629115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:54.000 [2024-07-12 15:00:32.639122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e99d8 00:20:54.000 [2024-07-12 15:00:32.640249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.000 [2024-07-12 15:00:32.640279] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:54.273 [2024-07-12 15:00:32.652610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190ebfd0 00:20:54.273 [2024-07-12 15:00:32.654226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.273 [2024-07-12 15:00:32.654258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:54.273 [2024-07-12 15:00:32.665065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190e99d8 00:20:54.273 [2024-07-12 15:00:32.666850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.273 [2024-07-12 15:00:32.666882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:54.273 [2024-07-12 15:00:32.673532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f5378 00:20:54.273 [2024-07-12 15:00:32.674337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.273 [2024-07-12 15:00:32.674366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:54.273 [2024-07-12 15:00:32.687924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f7100 00:20:54.273 [2024-07-12 15:00:32.689449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.273 [2024-07-12 15:00:32.689488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:54.273 [2024-07-12 15:00:32.699121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190ed0b0 00:20:54.273 [2024-07-12 15:00:32.700285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.273 [2024-07-12 15:00:32.700317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:54.273 [2024-07-12 15:00:32.710726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7b90) with pdu=0x2000190f0350 00:20:54.273 [2024-07-12 15:00:32.711915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.273 [2024-07-12 15:00:32.711946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:54.273 00:20:54.273 Latency(us) 00:20:54.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.273 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:54.273 nvme0n1 : 2.00 21236.32 82.95 0.00 0.00 6017.98 2457.60 16205.27 00:20:54.273 
=================================================================================================================== 00:20:54.273 Total : 21236.32 82.95 0.00 0.00 6017.98 2457.60 16205.27 00:20:54.273 0 00:20:54.273 15:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:54.273 15:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:54.273 15:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:54.273 | .driver_specific 00:20:54.273 | .nvme_error 00:20:54.273 | .status_code 00:20:54.273 | .command_transient_transport_error' 00:20:54.273 15:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 166 > 0 )) 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93714 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93714 ']' 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93714 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93714 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:54.569 killing process with pid 93714 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93714' 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93714 00:20:54.569 Received shutdown signal, test time was about 2.000000 seconds 00:20:54.569 00:20:54.569 Latency(us) 00:20:54.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.569 =================================================================================================================== 00:20:54.569 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93714 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:54.569 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:54.825 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93799 00:20:54.825 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:54.825 15:00:33 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93799 /var/tmp/bperf.sock 00:20:54.825 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93799 ']' 00:20:54.825 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:54.825 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:54.825 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:54.825 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.825 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:54.825 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:54.825 Zero copy mechanism will not be used. 00:20:54.825 [2024-07-12 15:00:33.280133] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:20:54.825 [2024-07-12 15:00:33.280231] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93799 ] 00:20:54.825 [2024-07-12 15:00:33.418187] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.082 [2024-07-12 15:00:33.486749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.082 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:55.082 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:55.082 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:55.082 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:55.343 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:55.343 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.343 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:55.343 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.343 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:55.343 15:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:55.600 nvme0n1 00:20:55.600 15:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:55.600 15:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.600 15:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:55.600 15:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.600 15:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:55.600 15:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:55.858 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:55.858 Zero copy mechanism will not be used. 00:20:55.858 Running I/O for 2 seconds... 00:20:55.858 [2024-07-12 15:00:34.307798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.858 [2024-07-12 15:00:34.308124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.858 [2024-07-12 15:00:34.308155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.858 [2024-07-12 15:00:34.313169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.858 [2024-07-12 15:00:34.313461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.858 [2024-07-12 15:00:34.313493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.858 [2024-07-12 15:00:34.318480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.858 [2024-07-12 15:00:34.318789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.858 [2024-07-12 15:00:34.318820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.858 [2024-07-12 15:00:34.323932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.858 [2024-07-12 15:00:34.324257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.858 [2024-07-12 15:00:34.324288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.858 [2024-07-12 15:00:34.329229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.858 [2024-07-12 15:00:34.329551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.858 [2024-07-12 15:00:34.329575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.858 [2024-07-12 15:00:34.334542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.858 [2024-07-12 15:00:34.334833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.858 [2024-07-12 15:00:34.334863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.858 [2024-07-12 15:00:34.339865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.858 [2024-07-12 15:00:34.340157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.858 [2024-07-12 15:00:34.340186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.858 [2024-07-12 15:00:34.345135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.858 [2024-07-12 15:00:34.345428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.858 [2024-07-12 15:00:34.345458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.858 [2024-07-12 15:00:34.350399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.858 [2024-07-12 15:00:34.350718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.858 [2024-07-12 15:00:34.350746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.858 [2024-07-12 15:00:34.355691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.858 [2024-07-12 15:00:34.355985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.858 [2024-07-12 15:00:34.356012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.858 [2024-07-12 15:00:34.360962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.858 [2024-07-12 15:00:34.361260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.858 [2024-07-12 15:00:34.361288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.858 [2024-07-12 15:00:34.366251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.858 [2024-07-12 15:00:34.366575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.858 [2024-07-12 15:00:34.366615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.858 [2024-07-12 15:00:34.371810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.858 [2024-07-12 15:00:34.372115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.372145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.377148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.377440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.377463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.382436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.382740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.382768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.387737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.388028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.388056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.393028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.393320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.393348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.398329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.398636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.398665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.403627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.403927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.403955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.408916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 
[2024-07-12 15:00:34.409207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.409236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.414172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.414487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.414528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.419479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.419790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.419818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.424784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.425090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.425118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.430143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.430438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.430471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.435437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.435749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.435780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.440801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.441126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.441158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.446186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.446495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.446536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.451433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.451740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.451770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.456805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.457101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.457129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.462112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.462418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.462446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.467372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.467676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.467704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.472765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.473050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.473082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.477852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.478132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.478164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.483045] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.483321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.483348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.488036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.488323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.488350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.493166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.493444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.493476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.498211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.498509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.498566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.503353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.503653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.503684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.859 [2024-07-12 15:00:34.508493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:55.859 [2024-07-12 15:00:34.508778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.859 [2024-07-12 15:00:34.508808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.125 [2024-07-12 15:00:34.513674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.125 [2024-07-12 15:00:34.513955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.125 [2024-07-12 15:00:34.513985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:56.125 [2024-07-12 15:00:34.518810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.125 [2024-07-12 15:00:34.519089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.125 [2024-07-12 15:00:34.519119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.125 [2024-07-12 15:00:34.523913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.125 [2024-07-12 15:00:34.524186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.125 [2024-07-12 15:00:34.524216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.125 [2024-07-12 15:00:34.529035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.125 [2024-07-12 15:00:34.529305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.125 [2024-07-12 15:00:34.529334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.125 [2024-07-12 15:00:34.534134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.125 [2024-07-12 15:00:34.534411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.125 [2024-07-12 15:00:34.534433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.125 [2024-07-12 15:00:34.539225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.539496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.539536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.544302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.544589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.544618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.549353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.549638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.549661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.554455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.554739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.554766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.559386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.559751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.559795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.564410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.564764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.564816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.569443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.569736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.569783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.574068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.574358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.574402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.578759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.579079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.579124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.583290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.583680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.583723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.587952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.588313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.588357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.592608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.592972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.593015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.597096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.597174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.597208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.601646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.601897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.601939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.606265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.606356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.606389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.610885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.611016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.611050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.615496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.615745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.615779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.620052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.620151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.620185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.624686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.624915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.624949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.629352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.629463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.629499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.634014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.634096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.634130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.638617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.638829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.638863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.643189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.643272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.126 [2024-07-12 15:00:34.643297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.126 [2024-07-12 15:00:34.647823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.126 [2024-07-12 15:00:34.647895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.127 
[2024-07-12 15:00:34.647918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.127 [2024-07-12 15:00:34.652464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.127 [2024-07-12 15:00:34.652576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.127 [2024-07-12 15:00:34.652599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.127 [2024-07-12 15:00:34.657128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.127 [2024-07-12 15:00:34.657199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.127 [2024-07-12 15:00:34.657221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.127 [2024-07-12 15:00:34.661768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.127 [2024-07-12 15:00:34.661849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.127 [2024-07-12 15:00:34.661872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.127 [2024-07-12 15:00:34.666430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.127 [2024-07-12 15:00:34.666525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.127 [2024-07-12 15:00:34.666548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.127 [2024-07-12 15:00:34.671138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.127 [2024-07-12 15:00:34.671233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.127 [2024-07-12 15:00:34.671255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.128 [2024-07-12 15:00:34.675824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.128 [2024-07-12 15:00:34.675907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.128 [2024-07-12 15:00:34.675930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.128 [2024-07-12 15:00:34.680545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.128 [2024-07-12 15:00:34.680633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.128 [2024-07-12 15:00:34.680656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.128 [2024-07-12 15:00:34.685222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.128 [2024-07-12 15:00:34.685295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.128 [2024-07-12 15:00:34.685318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.128 [2024-07-12 15:00:34.689828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.128 [2024-07-12 15:00:34.689912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.128 [2024-07-12 15:00:34.689935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.128 [2024-07-12 15:00:34.694861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.128 [2024-07-12 15:00:34.694933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.128 [2024-07-12 15:00:34.694957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.128 [2024-07-12 15:00:34.699571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.128 [2024-07-12 15:00:34.699657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.128 [2024-07-12 15:00:34.699680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.128 [2024-07-12 15:00:34.704315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.128 [2024-07-12 15:00:34.704405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.128 [2024-07-12 15:00:34.704428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.128 [2024-07-12 15:00:34.709038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.128 [2024-07-12 15:00:34.709121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.129 [2024-07-12 15:00:34.709143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.129 [2024-07-12 15:00:34.713670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.129 [2024-07-12 15:00:34.713744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.129 [2024-07-12 15:00:34.713766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.129 [2024-07-12 15:00:34.718291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.129 [2024-07-12 15:00:34.718382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.129 [2024-07-12 15:00:34.718403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.129 [2024-07-12 15:00:34.722935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.129 [2024-07-12 15:00:34.723025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.129 [2024-07-12 15:00:34.723048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.129 [2024-07-12 15:00:34.727700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.129 [2024-07-12 15:00:34.727796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.129 [2024-07-12 15:00:34.727819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.129 [2024-07-12 15:00:34.732444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.129 [2024-07-12 15:00:34.732542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.129 [2024-07-12 15:00:34.732565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.129 [2024-07-12 15:00:34.737094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.129 [2024-07-12 15:00:34.737166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.129 [2024-07-12 15:00:34.737188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.129 [2024-07-12 15:00:34.741754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.129 [2024-07-12 15:00:34.741846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.130 [2024-07-12 15:00:34.741869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.130 [2024-07-12 15:00:34.746499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.130 [2024-07-12 15:00:34.746605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.130 [2024-07-12 15:00:34.746627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.130 [2024-07-12 15:00:34.751158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.130 [2024-07-12 15:00:34.751249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.130 [2024-07-12 15:00:34.751271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.130 [2024-07-12 15:00:34.755784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.130 [2024-07-12 15:00:34.755868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.130 [2024-07-12 15:00:34.755890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.130 [2024-07-12 15:00:34.760470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.130 [2024-07-12 15:00:34.760575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.130 [2024-07-12 15:00:34.760599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.130 [2024-07-12 15:00:34.765177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.130 [2024-07-12 15:00:34.765273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.130 [2024-07-12 15:00:34.765294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.130 [2024-07-12 15:00:34.769813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.130 [2024-07-12 15:00:34.769908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.130 [2024-07-12 15:00:34.769930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.130 [2024-07-12 15:00:34.774534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.130 [2024-07-12 15:00:34.774610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.130 [2024-07-12 15:00:34.774632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.779297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 
[2024-07-12 15:00:34.779373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.779396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.784042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.784112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.784135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.788738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.788808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.788830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.793411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.793490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.793511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.798177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.798272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.798294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.802835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.802917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.802940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.807577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.807671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.807694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.812207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.812290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.812313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.816961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.817047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.817070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.821633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.821715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.821737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.826377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.826448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.826469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.831052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.831122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.831144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.835771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.835848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.835871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.840499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.840599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.840621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.845141] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.845226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.845249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.849758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.849831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.849853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.854385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.854460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.854482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.859064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.859153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.859175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.863776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.863869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.863891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.868441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.868550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.868573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.873140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.873237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.873260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:56.398 [2024-07-12 15:00:34.877852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.877926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.877949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.882581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.882672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.882695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.887236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.887309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.887331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.398 [2024-07-12 15:00:34.891984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.398 [2024-07-12 15:00:34.892076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.398 [2024-07-12 15:00:34.892099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.896694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.896788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.896811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.901446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.901534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.901557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.906730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.906825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.906847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.912537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.912629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.912652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.917704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.917779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.917801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.922806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.922901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.922923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.927607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.927690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.927712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.932681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.932776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.932798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.937635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.937708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.937731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.942449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.942573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.942597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.947188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.947275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.947302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.951921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.952015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.952037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.956701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.956800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.956823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.961368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.961451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.961474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.966080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.966152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.966174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.970759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.970833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.970856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.975340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.975412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.975434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.980103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.980196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.980218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.984759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.984851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.984873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.989455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.989542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.989564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.994163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.994256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.994278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:34.998887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:34.998987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:34.999012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:35.003583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:35.003666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:35.003692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:35.008348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:35.008462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 
[2024-07-12 15:00:35.008487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:35.013126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:35.013223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:35.013246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:35.017837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:35.017929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:35.017952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:35.022488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:35.022588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:35.022611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:35.027220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:35.027305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:35.027326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:35.031859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:35.031948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:35.031970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:35.036626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:35.036711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:35.036733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:35.041288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:35.041360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:35.041382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:35.045990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:35.046063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:35.046087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.399 [2024-07-12 15:00:35.050658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.399 [2024-07-12 15:00:35.050730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.399 [2024-07-12 15:00:35.050753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.055311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.055393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.055415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.059990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.060081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.060103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.064699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.064787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.064810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.069438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.069587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.069609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.075619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.075697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.075720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.080730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.080817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.080839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.085563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.085636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.085659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.090362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.090442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.090463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.095211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.095295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.095318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.100077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.100164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.100186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.104886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.104978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.105000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.109641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.109712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.109734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.114269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.114354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.114377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.118974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.119076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.119102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.123756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.123859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.123885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.128512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.128615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.128637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.133142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.133223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.133245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.137877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.137973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.137996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.142527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 
[2024-07-12 15:00:35.142618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.142640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.147210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.147304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.147327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.151932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.152006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.152028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.156633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.156705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.156727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.161333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.161419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.161441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.165973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.166046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.166068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.170744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.170817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.170839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.175396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.175470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.175492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.180106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.180201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.180223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.184745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.184828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.184849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.189404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.189499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.189536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.194030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.194121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.194143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.198738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.198818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.659 [2024-07-12 15:00:35.198841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.659 [2024-07-12 15:00:35.203384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.659 [2024-07-12 15:00:35.203477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.203500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.208129] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.208210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.208232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.212791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.212882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.212904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.217442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.217529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.217552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.222161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.222257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.222279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.226839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.226923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.226947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.231594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.231696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.231720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.236279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.236375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.236398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:56.660 [2024-07-12 15:00:35.241053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.241127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.241151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.245726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.245805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.245830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.250423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.250537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.250563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.255719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.255813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.255841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.260650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.260757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.260783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.266820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.266928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.266951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.272565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.272648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.272670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.277394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.277488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.277510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.282275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.282371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.282394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.287274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.287362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.287385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.292345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.292439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.292461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.297198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.297273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.297295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.302259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.302353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.302375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.660 [2024-07-12 15:00:35.307177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.660 [2024-07-12 15:00:35.307261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.660 [2024-07-12 15:00:35.307284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.920 [2024-07-12 15:00:35.311976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.920 [2024-07-12 15:00:35.312059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.920 [2024-07-12 15:00:35.312082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.920 [2024-07-12 15:00:35.316893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.920 [2024-07-12 15:00:35.316980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.920 [2024-07-12 15:00:35.317002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.920 [2024-07-12 15:00:35.321740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.920 [2024-07-12 15:00:35.321833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.920 [2024-07-12 15:00:35.321854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.920 [2024-07-12 15:00:35.326484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.920 [2024-07-12 15:00:35.326597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.920 [2024-07-12 15:00:35.326619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.920 [2024-07-12 15:00:35.331376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.920 [2024-07-12 15:00:35.331468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.920 [2024-07-12 15:00:35.331489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.336341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.336429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.336452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.341110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.341182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.341204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.346014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.346090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.346113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.350960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.351041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.351063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.355718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.355798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.355821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.360510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.360608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.360631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.365226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.365318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.365339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.369970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.370063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.370085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.374695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.374778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 
[2024-07-12 15:00:35.374799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.379402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.379494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.379528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.384053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.384134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.384155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.388827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.388921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.388944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.393568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.393661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.393692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.398232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.398315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.398337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.402942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.403035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.403056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.407673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.407743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.407764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.412941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.413013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.413036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.417560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.417651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.417672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.422301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.422372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.422394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.426977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.427065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.427090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.431730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.431819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.431846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.436481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.436586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.436612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.441206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.441306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.441333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.445876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.445968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.445991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.450564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.450654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.450676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.455150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.455229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.455251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.459914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.459986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.460009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.921 [2024-07-12 15:00:35.464642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.921 [2024-07-12 15:00:35.464712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.921 [2024-07-12 15:00:35.464734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.469207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.469280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.469301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.473846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.473928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.473950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.478413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.478502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.478539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.483138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.483228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.483249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.487834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.487906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.487927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.492541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.492623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.492645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.497182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.497252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.497274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.501852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.501924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.501946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.506490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 
[2024-07-12 15:00:35.506588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.506609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.511158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.511229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.511251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.515777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.515849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.515871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.520469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.520564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.520586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.525103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.525191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.525212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.529741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.529830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.529851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.534418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.534489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.534511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.539041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.539113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.539136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.543747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.543826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.543848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.548471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.548561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.548584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.553138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.553210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.553232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.557820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.557896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.557918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.562429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.562502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.562539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.567199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.567293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.567316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.922 [2024-07-12 15:00:35.571852] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:56.922 [2024-07-12 15:00:35.571943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.922 [2024-07-12 15:00:35.571965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.576531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.576615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.576637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.581204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.581285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.581306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.585853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.585929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.585951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.590670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.590765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.590787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.597116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.597211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.597234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.601970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.602068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.602090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:57.183 [2024-07-12 15:00:35.606943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.607037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.607059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.611823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.611909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.611931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.616543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.616618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.616641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.621196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.621270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.621292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.626032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.626104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.626127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.630652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.630728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.630749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.635356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.635436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.635457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.641208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.641286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.641308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.647096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.647194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.647225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.652664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.652751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.652773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.657510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.657623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.657645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.662279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.662363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.662390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.667674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.667756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.667779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.672460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.672559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.672581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.678357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.678432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.678454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.683268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.683342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.683364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.688015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.688088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.688110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.692718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.692811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.692833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.697693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.697775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.697797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.702396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.702489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.702511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.707135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.707217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.707239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.711894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.183 [2024-07-12 15:00:35.711973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.183 [2024-07-12 15:00:35.711996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.183 [2024-07-12 15:00:35.716629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.716712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.716734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.721294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.721367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.721389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.725982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.726075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.726098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.730650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.730722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.730745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.735382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.735487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.735511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.740098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.740212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 
[2024-07-12 15:00:35.740255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.744855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.744962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.744990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.749508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.749623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.749652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.754219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.754313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.754340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.758943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.759034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.759057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.763644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.763728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.763751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.768289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.768361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.768382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.773026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.773112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.773134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.777765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.777858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.777879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.782442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.782535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.782558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.787183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.787276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.787298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.791869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.791956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.791977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.796530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.796623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.796644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.801142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.801215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.801237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.805875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.805947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.805969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.810536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.810619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.810641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.815157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.815229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.815251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.819847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.819919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.819941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.824490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.824578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.824600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.829189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.829262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.829283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.184 [2024-07-12 15:00:35.833858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.184 [2024-07-12 15:00:35.833953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.184 [2024-07-12 15:00:35.833977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.838479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.838587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.838611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.843115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.843194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.843217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.847847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.847919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.847941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.852588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.852661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.852684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.857239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.857323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.857346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.861892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.861971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.861993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.866600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.866673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.866695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.871258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 
[2024-07-12 15:00:35.871351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.871373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.876022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.876116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.876138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.880750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.880850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.880872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.885450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.885560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.885583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.890124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.890212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.890234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.894842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.894935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.894958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.899596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.899680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.899705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.904406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.904499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.904544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.909217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.909305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.909327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.914007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.444 [2024-07-12 15:00:35.914089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.444 [2024-07-12 15:00:35.914122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.444 [2024-07-12 15:00:35.918704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.918794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.918819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.923507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.923612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.923635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.928215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.928316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.928338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.932987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.933068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.933090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.937749] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.937829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.937852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.942432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.942534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.942556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.947137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.947230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.947252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.951805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.951893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.951915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.956430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.956534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.956557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.961151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.961233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.961255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.965806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.965900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.965923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
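(Note for readers scanning this log: every record above follows the same pattern — tcp.c:2067:data_crc32_calc_done reports a data digest error on the qpair, and the corresponding WRITE completes with TRANSIENT TRANSPORT ERROR (00/22). That status means the CRC32C computed over the received data PDU payload did not match the data digest (DDGST) carried with the PDU, which is the expected outcome of the digest-error injection this test performs. The sketch below is not SPDK code; it is a minimal, self-contained illustration of such a CRC32C data-digest check, using hypothetical names crc32c() and data_digest_ok(), to show what a mismatch like the ones logged here means.)

    /*
     * Illustrative sketch only (not SPDK's implementation): compute a bitwise
     * CRC32C (Castagnoli, reflected polynomial 0x82F63B78) over a data PDU
     * payload and compare it with the digest received alongside the PDU.
     */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= p[i];
            for (int b = 0; b < 8; b++) {
                /* Reflected CRC step: shift right, conditionally XOR the polynomial. */
                crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* True when the payload's CRC32C matches the digest that came with the PDU. */
    static bool data_digest_ok(const void *payload, size_t len, uint32_t received_ddgst)
    {
        return crc32c(payload, len) == received_ddgst;
    }

    int main(void)
    {
        uint8_t payload[512] = { 0 };                 /* hypothetical payload buffer */
        uint32_t good = crc32c(payload, sizeof(payload));

        /* Matching digest passes; a corrupted digest reproduces a "data digest error". */
        printf("ok: %d\n", data_digest_ok(payload, sizeof(payload), good));
        printf("ok: %d\n", data_digest_ok(payload, sizeof(payload), good ^ 1u));
        return 0;
    }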
00:20:57.445 [2024-07-12 15:00:35.970453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.970560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.970582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.975144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.975218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.975239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.979836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.979910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.979933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.984553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.984648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.984670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.989212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.989302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.989324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.993886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.993980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.994002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:35.998530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:35.998622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:35.998644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:36.003233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:36.003316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:36.003337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:36.007927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:36.007998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:36.008020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:36.012584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:36.012655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:36.012678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:36.017204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:36.017280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:36.017303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:36.021871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:36.021951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:36.021974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:36.026550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:36.026636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:36.026657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:36.031258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:36.031349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:36.031371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:36.035929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:36.035999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:36.036021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:36.040610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:36.040702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:36.040725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:36.045334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:36.045409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:36.045431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:36.050038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.445 [2024-07-12 15:00:36.050111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.445 [2024-07-12 15:00:36.050133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.445 [2024-07-12 15:00:36.054667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.446 [2024-07-12 15:00:36.054741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.446 [2024-07-12 15:00:36.054763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.446 [2024-07-12 15:00:36.059324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.446 [2024-07-12 15:00:36.059397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.446 [2024-07-12 15:00:36.059419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.446 [2024-07-12 15:00:36.063958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.446 [2024-07-12 15:00:36.064031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.446 [2024-07-12 15:00:36.064053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.446 [2024-07-12 15:00:36.068727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.446 [2024-07-12 15:00:36.068800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.446 [2024-07-12 15:00:36.068822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.446 [2024-07-12 15:00:36.073320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.446 [2024-07-12 15:00:36.073394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.446 [2024-07-12 15:00:36.073416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.446 [2024-07-12 15:00:36.077972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.446 [2024-07-12 15:00:36.078057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.446 [2024-07-12 15:00:36.078079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.446 [2024-07-12 15:00:36.082688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.446 [2024-07-12 15:00:36.082814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.446 [2024-07-12 15:00:36.082842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.446 [2024-07-12 15:00:36.087627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.446 [2024-07-12 15:00:36.087743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.446 [2024-07-12 15:00:36.087771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.446 [2024-07-12 15:00:36.093705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.446 [2024-07-12 15:00:36.093813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.446 [2024-07-12 15:00:36.093843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.099035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.099169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 
[2024-07-12 15:00:36.099197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.103903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.104018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.104047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.108851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.108946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.108968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.113716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.113790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.113812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.118461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.118578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.118600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.123179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.123271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.123293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.127953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.128027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.128049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.132677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.132761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.132783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.137386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.137462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.137484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.142224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.142319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.142342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.146897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.146980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.147002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.151631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.151705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.151727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.156297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.156388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.156410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.161046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.161139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.161161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.165804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.165895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.165917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.170588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.170662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.170683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.175377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.175454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.175476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.180138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.180219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.180251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.184865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.184958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.184979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.189681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.189763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.189785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.194407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.194486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.194507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.199206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.199289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.199311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.203977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.204072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.204094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.208682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.208773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.208795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.213398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.213470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.213492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.218138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.218229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.218250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.222898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.222983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.223004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.227670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.227743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.227765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.232462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 
[2024-07-12 15:00:36.232569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.232591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.237236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.237330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.237352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.241962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.242044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.242066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.246762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.246849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.246871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.251559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.251635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.251659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.256363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.256460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.256488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.261185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.261292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.261323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.265991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.707 [2024-07-12 15:00:36.266097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.707 [2024-07-12 15:00:36.266124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.707 [2024-07-12 15:00:36.270719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.708 [2024-07-12 15:00:36.270804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.708 [2024-07-12 15:00:36.270835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.708 [2024-07-12 15:00:36.275412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.708 [2024-07-12 15:00:36.275507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.708 [2024-07-12 15:00:36.275545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.708 [2024-07-12 15:00:36.280176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.708 [2024-07-12 15:00:36.280277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.708 [2024-07-12 15:00:36.280299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.708 [2024-07-12 15:00:36.284891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.708 [2024-07-12 15:00:36.284984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.708 [2024-07-12 15:00:36.285007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.708 [2024-07-12 15:00:36.289623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.708 [2024-07-12 15:00:36.289708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.708 [2024-07-12 15:00:36.289729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.708 [2024-07-12 15:00:36.294329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.708 [2024-07-12 15:00:36.294421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.708 [2024-07-12 15:00:36.294443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.708 [2024-07-12 15:00:36.299055] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fa7d00) with pdu=0x2000190fef90 00:20:57.708 [2024-07-12 15:00:36.299153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.708 [2024-07-12 15:00:36.299175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.708 00:20:57.708 Latency(us) 00:20:57.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.708 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:57.708 nvme0n1 : 2.00 6416.10 802.01 0.00 0.00 2487.52 1936.29 6732.33 00:20:57.708 =================================================================================================================== 00:20:57.708 Total : 6416.10 802.01 0.00 0.00 2487.52 1936.29 6732.33 00:20:57.708 0 00:20:57.708 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:57.708 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:57.708 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:57.708 | .driver_specific 00:20:57.708 | .nvme_error 00:20:57.708 | .status_code 00:20:57.708 | .command_transient_transport_error' 00:20:57.708 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 414 > 0 )) 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93799 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93799 ']' 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93799 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93799 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:58.273 killing process with pid 93799 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93799' 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93799 00:20:58.273 Received shutdown signal, test time was about 2.000000 seconds 00:20:58.273 00:20:58.273 Latency(us) 00:20:58.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.273 =================================================================================================================== 00:20:58.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93799 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # 
killprocess 93533 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93533 ']' 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93533 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93533 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:58.273 killing process with pid 93533 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93533' 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93533 00:20:58.273 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93533 00:20:58.532 00:20:58.532 real 0m15.828s 00:20:58.532 user 0m30.546s 00:20:58.532 sys 0m4.376s 00:20:58.532 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:58.532 15:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:58.532 ************************************ 00:20:58.532 END TEST nvmf_digest_error 00:20:58.532 ************************************ 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:58.532 rmmod nvme_tcp 00:20:58.532 rmmod nvme_fabrics 00:20:58.532 rmmod nvme_keyring 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 93533 ']' 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 93533 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 93533 ']' 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 93533 00:20:58.532 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93533) - No such process 00:20:58.532 Process with pid 93533 is not found 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 93533 is not found' 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # 
'[' '' == iso ']' 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:58.532 00:20:58.532 real 0m34.676s 00:20:58.532 user 1m6.110s 00:20:58.532 sys 0m9.013s 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:58.532 15:00:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:58.532 ************************************ 00:20:58.532 END TEST nvmf_digest 00:20:58.532 ************************************ 00:20:58.790 15:00:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:58.790 15:00:37 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:20:58.790 15:00:37 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:20:58.790 15:00:37 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:58.790 15:00:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:58.790 15:00:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:58.790 15:00:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:58.790 ************************************ 00:20:58.790 START TEST nvmf_mdns_discovery 00:20:58.790 ************************************ 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:58.790 * Looking for test storage... 
00:20:58.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:20:58.790 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:20:58.791 
15:00:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:58.791 Cannot find device "nvmf_tgt_br" 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:58.791 Cannot find device "nvmf_tgt_br2" 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:58.791 Cannot find device "nvmf_tgt_br" 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:58.791 Cannot find device "nvmf_tgt_br2" 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:58.791 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:59.048 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:59.048 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:59.048 15:00:37 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:59.048 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:59.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:59.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:20:59.049 00:20:59.049 --- 10.0.0.2 ping statistics --- 00:20:59.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.049 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:59.049 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:59.049 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.130 ms 00:20:59.049 00:20:59.049 --- 10.0.0.3 ping statistics --- 00:20:59.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.049 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:59.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:20:59.049 00:20:59.049 --- 10.0.0.1 ping statistics --- 00:20:59.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.049 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=94078 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 94078 00:20:59.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
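For reference, the nvmf_veth_init sequence traced above reduces to the following standalone sketch. Interface names, the namespace name, and the 10.0.0.x addresses are taken from the log; cleanup steps and error handling in common.sh are omitted, so treat this as an illustration rather than the canonical script.

    # create the target network namespace and three veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target-side ends into the namespace and assign addresses
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring every interface up, including loopback inside the namespace
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers together and allow NVMe/TCP traffic on port 4420
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # connectivity checks mirrored by the pings logged above
    ping -c 1 10.0.0.2; ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1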
00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94078 ']' 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.049 15:00:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.307 [2024-07-12 15:00:37.750560] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:20:59.308 [2024-07-12 15:00:37.750718] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.308 [2024-07-12 15:00:37.891780] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.308 [2024-07-12 15:00:37.948954] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.308 [2024-07-12 15:00:37.949014] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.308 [2024-07-12 15:00:37.949027] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.308 [2024-07-12 15:00:37.949036] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.308 [2024-07-12 15:00:37.949043] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
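The target itself runs inside that namespace and is started with --wait-for-rpc, so it stays idle until the script configures it over the RPC socket. A minimal sketch of this bring-up and of the RPC calls traced just below; the rpc.py path is copied from the log, and the default socket /var/tmp/spdk.sock is assumed.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # launch the NVMe-oF target in the namespace; init is deferred until RPC time
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!
    # once /var/tmp/spdk.sock is listening: filter discovery log entries by the
    # address the request arrived on, finish init, create the TCP transport, and
    # expose the discovery subsystem on 10.0.0.2:8009
    $rpc nvmf_set_config --discovery-filter=address
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009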
00:20:59.308 [2024-07-12 15:00:37.949074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.241 [2024-07-12 15:00:38.883717] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.241 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:00.242 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.242 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.242 [2024-07-12 15:00:38.891849] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:00.500 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.500 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:00.500 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.500 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.500 null0 00:21:00.500 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.500 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:00.500 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.500 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
00:21:00.500 null1 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.501 null2 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.501 null3 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94129 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94129 /tmp/host.sock 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94129 ']' 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.501 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.501 15:00:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.501 [2024-07-12 15:00:38.993896] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:21:00.501 [2024-07-12 15:00:38.994000] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94129 ] 00:21:00.501 [2024-07-12 15:00:39.134046] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.758 [2024-07-12 15:00:39.206998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.692 15:00:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.692 15:00:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:01.692 15:00:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:21:01.692 15:00:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:21:01.692 15:00:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:21:01.692 15:00:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94159 00:21:01.692 15:00:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:21:01.692 15:00:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:21:01.692 15:00:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:21:01.692 Process 980 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:21:01.692 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:21:01.692 Successfully dropped root privileges. 00:21:01.692 avahi-daemon 0.8 starting up. 00:21:01.692 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:21:01.692 Successfully called chroot(). 00:21:01.692 Successfully dropped remaining capabilities. 00:21:01.692 No service file found in /etc/avahi/services. 00:21:02.627 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:21:02.627 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:21:02.627 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:21:02.627 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:21:02.627 Network interface enumeration completed. 00:21:02.627 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:21:02.627 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:21:02.627 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:21:02.627 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:21:02.627 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 4097902296. 
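The avahi-daemon whose startup messages appear above receives its configuration through a process substitution (the /dev/fd/63 argument). Written out, the file it reads is just the four lines echoed by mdns_discovery.sh; a sketch of the same step, with the rendered config shown in comments:

    avahi-daemon --kill
    # rendered configuration passed via /dev/fd/63:
    #   [server]
    #   allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
    #   use-ipv4=yes
    #   use-ipv6=no
    # run inside the target namespace so the announced records resolve to 10.0.0.2/10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon \
        -f <(echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
    avahipid=$!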
00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:02.627 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.886 [2024-07-12 15:00:41.406974] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:02.886 15:00:41 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.886 [2024-07-12 15:00:41.488535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.886 [2024-07-12 15:00:41.528557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
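The entries around this point configure the NVMe-oF target side: two subsystems (cnode0 backed by null0 and listening on 10.0.0.2, cnode20 backed by null2 on 10.0.0.3), the allowed host NQN nqn.2021-12.io.spdk:test, and the well-known discovery subsystem listening on port 8009; the next entries add the cnode20 data listener and publish the mDNS PRR. A hedged replay of that sequence, using only the RPCs visible in the trace (rpc_cmd without -s goes to the target's default RPC socket):

	# Target-side bring-up as traced above (sketch; ordering per the log).
	rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
	rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
	rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
	rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
	rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
	rpc_cmd nvmf_publish_mdns_prr   # advertise the _nvme-disc._tcp service so the host's mDNS browser can find it

Once the PRR is published, the mDNS browse/resolve callbacks below report the spdk0/spdk1 services on 10.0.0.2 and 10.0.0.3, and the host attaches mdns0_nvme0/mdns1_nvme0 from the returned discovery log pages.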
00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.886 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.886 [2024-07-12 15:00:41.536434] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:03.144 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.144 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:21:03.144 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.144 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.144 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.144 15:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:21:03.711 [2024-07-12 15:00:42.306974] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:21:04.277 [2024-07-12 15:00:42.907009] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:04.277 [2024-07-12 15:00:42.907059] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:21:04.277 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:04.277 cookie is 0 00:21:04.277 is_local: 1 00:21:04.277 our_own: 0 00:21:04.277 wide_area: 0 00:21:04.277 multicast: 1 00:21:04.277 cached: 1 00:21:04.536 [2024-07-12 15:00:43.007009] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:04.536 [2024-07-12 15:00:43.007071] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:21:04.536 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:04.536 cookie is 0 00:21:04.536 is_local: 1 00:21:04.536 our_own: 0 00:21:04.536 wide_area: 0 00:21:04.536 multicast: 1 00:21:04.536 cached: 1 00:21:04.536 [2024-07-12 15:00:43.007091] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:21:04.536 [2024-07-12 15:00:43.107002] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:04.536 [2024-07-12 15:00:43.107048] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:21:04.536 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:04.536 cookie is 0 00:21:04.536 is_local: 1 00:21:04.536 our_own: 0 00:21:04.536 wide_area: 0 00:21:04.536 multicast: 1 00:21:04.536 cached: 1 00:21:04.796 [2024-07-12 15:00:43.207002] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:04.796 [2024-07-12 15:00:43.207052] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:21:04.796 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:04.796 cookie is 0 00:21:04.796 is_local: 1 00:21:04.796 our_own: 0 00:21:04.796 wide_area: 0 00:21:04.796 multicast: 1 00:21:04.796 cached: 1 00:21:04.796 [2024-07-12 15:00:43.207067] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:21:05.361 [2024-07-12 15:00:43.920233] bdev_nvme.c:6991:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:05.361 [2024-07-12 15:00:43.920306] bdev_nvme.c:7071:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:05.361 [2024-07-12 15:00:43.920343] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:05.361 [2024-07-12 15:00:44.006407] bdev_nvme.c:6920:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:21:05.619 [2024-07-12 15:00:44.063921] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:21:05.619 [2024-07-12 15:00:44.063995] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:21:05.619 [2024-07-12 15:00:44.120051] bdev_nvme.c:6991:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:05.619 [2024-07-12 15:00:44.120094] bdev_nvme.c:7071:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:05.619 [2024-07-12 15:00:44.120115] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:05.619 [2024-07-12 15:00:44.206182] bdev_nvme.c:6920:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:21:05.619 [2024-07-12 15:00:44.262459] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:21:05.619 [2024-07-12 15:00:44.262503] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:08.147 15:00:46 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:21:08.147 
15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.147 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.404 15:00:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:21:09.351 15:00:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:21:09.351 15:00:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:09.351 15:00:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:09.351 15:00:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.351 15:00:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.351 15:00:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:09.351 15:00:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:09.608 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.608 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:09.608 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:21:09.608 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:09.608 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:21:09.608 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.608 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.609 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.609 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:21:09.609 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:21:09.609 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:21:09.609 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:09.609 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.609 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.609 [2024-07-12 15:00:48.075513] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:09.609 [2024-07-12 15:00:48.076105] bdev_nvme.c:6973:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:09.609 [2024-07-12 15:00:48.076144] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:09.609 [2024-07-12 15:00:48.076183] bdev_nvme.c:6973:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:09.609 [2024-07-12 15:00:48.076198] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:09.609 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.609 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:21:09.609 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.609 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.609 [2024-07-12 15:00:48.083434] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:09.609 [2024-07-12 15:00:48.084125] bdev_nvme.c:6973:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:09.609 [2024-07-12 15:00:48.084189] bdev_nvme.c:6973:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:09.609 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.609 15:00:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:21:09.609 [2024-07-12 15:00:48.214210] bdev_nvme.c:6915:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:21:09.609 [2024-07-12 15:00:48.214656] bdev_nvme.c:6915:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:21:09.894 [2024-07-12 15:00:48.273990] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:21:09.894 [2024-07-12 15:00:48.274040] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:09.894 [2024-07-12 15:00:48.274049] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:09.894 [2024-07-12 15:00:48.274072] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:09.894 [2024-07-12 15:00:48.274697] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:21:09.894 [2024-07-12 15:00:48.274719] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:21:09.894 [2024-07-12 15:00:48.274726] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:21:09.894 [2024-07-12 15:00:48.274743] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:09.894 [2024-07-12 15:00:48.319328] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:09.894 [2024-07-12 15:00:48.319368] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:09.894 [2024-07-12 15:00:48.320302] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:21:09.894 [2024-07-12 15:00:48.320321] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:21:10.478 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:21:10.478 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:10.478 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:10.478 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.478 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:10.478 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.478 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:10.478 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == 
\m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.736 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.996 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:21:10.996 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:21:10.996 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:21:10.996 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:10.996 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.996 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.996 [2024-07-12 15:00:49.401006] bdev_nvme.c:6973:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:10.996 [2024-07-12 15:00:49.401045] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:10.996 [2024-07-12 15:00:49.401082] bdev_nvme.c:6973:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:10.996 [2024-07-12 15:00:49.401096] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:10.996 [2024-07-12 15:00:49.403181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.996 [2024-07-12 15:00:49.403221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.996 [2024-07-12 15:00:49.403234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.996 [2024-07-12 15:00:49.403244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.996 [2024-07-12 15:00:49.403254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.996 [2024-07-12 15:00:49.403263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.996 [2024-07-12 15:00:49.403273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.996 [2024-07-12 15:00:49.403282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.996 [2024-07-12 15:00:49.403292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e879a0 is same with the state(5) to be set 00:21:10.996 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.996 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:21:10.996 15:00:49 nvmf_tcp.nvmf_mdns_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.996 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.996 [2024-07-12 15:00:49.409005] bdev_nvme.c:6973:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:10.996 [2024-07-12 15:00:49.409063] bdev_nvme.c:6973:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:10.996 [2024-07-12 15:00:49.411180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.996 [2024-07-12 15:00:49.411213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.996 [2024-07-12 15:00:49.411226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.996 [2024-07-12 15:00:49.411235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.996 [2024-07-12 15:00:49.411245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.996 [2024-07-12 15:00:49.411254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.996 [2024-07-12 15:00:49.411264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.996 [2024-07-12 15:00:49.411273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.996 [2024-07-12 15:00:49.411282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e65300 is same with the state(5) to be set 00:21:10.996 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.997 15:00:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:21:10.997 [2024-07-12 15:00:49.413141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e879a0 (9): Bad file descriptor 00:21:10.997 [2024-07-12 15:00:49.421150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65300 (9): Bad file descriptor 00:21:10.997 [2024-07-12 15:00:49.423163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:10.997 [2024-07-12 15:00:49.423279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.997 [2024-07-12 15:00:49.423303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e879a0 with addr=10.0.0.2, port=4420 00:21:10.997 [2024-07-12 15:00:49.423315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e879a0 is same with the state(5) to be set 00:21:10.997 [2024-07-12 15:00:49.423332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e879a0 (9): Bad file descriptor 00:21:10.997 [2024-07-12 15:00:49.423347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:10.997 [2024-07-12 15:00:49.423355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:10.997 
[2024-07-12 15:00:49.423366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:10.997 [2024-07-12 15:00:49.423382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.997 [2024-07-12 15:00:49.431161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:10.997 [2024-07-12 15:00:49.431252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.997 [2024-07-12 15:00:49.431274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e65300 with addr=10.0.0.3, port=4420 00:21:10.997 [2024-07-12 15:00:49.431285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e65300 is same with the state(5) to be set 00:21:10.997 [2024-07-12 15:00:49.431301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65300 (9): Bad file descriptor 00:21:10.997 [2024-07-12 15:00:49.431315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:10.997 [2024-07-12 15:00:49.431324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:10.997 [2024-07-12 15:00:49.431333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:10.997 [2024-07-12 15:00:49.431349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.997 [2024-07-12 15:00:49.433219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:10.997 [2024-07-12 15:00:49.433299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.997 [2024-07-12 15:00:49.433320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e879a0 with addr=10.0.0.2, port=4420 00:21:10.997 [2024-07-12 15:00:49.433331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e879a0 is same with the state(5) to be set 00:21:10.997 [2024-07-12 15:00:49.433346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e879a0 (9): Bad file descriptor 00:21:10.997 [2024-07-12 15:00:49.433361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:10.997 [2024-07-12 15:00:49.433369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:10.997 [2024-07-12 15:00:49.433378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:10.997 [2024-07-12 15:00:49.433392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
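From here the script has removed the 4420 listeners from both subsystems (mdns_discovery.sh@160/@161 above), so the host keeps resetting the controllers that were connected through 4420 and each reconnect attempt fails with connect() errno 111 (ECONNREFUSED), which is what the repeated "Resetting controller failed" entries below record. The surviving 4421 path is what the script later checks with its path helper, sketched here from the host/mdns_discovery.sh@73 expansion earlier in the trace (same rpc_cmd/jq assumptions as above):

	# Assumed form of get_subsystem_paths, per the @73 xtrace expansion above.
	get_subsystem_paths() {
		local name=$1
		rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
			| jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
	}
	# Earlier in the trace this printed "4420 4421"; once the 4420 listeners
	# are gone, only "4421" is expected for mdns0_nvme0 and mdns1_nvme0.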
00:21:10.997 [2024-07-12 15:00:49.441216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:10.997 [2024-07-12 15:00:49.441300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.997 [2024-07-12 15:00:49.441321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e65300 with addr=10.0.0.3, port=4420 00:21:10.997 [2024-07-12 15:00:49.441331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e65300 is same with the state(5) to be set 00:21:10.997 [2024-07-12 15:00:49.441347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65300 (9): Bad file descriptor 00:21:10.997 [2024-07-12 15:00:49.441361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:10.997 [2024-07-12 15:00:49.441370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:10.997 [2024-07-12 15:00:49.441379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:10.997 [2024-07-12 15:00:49.441393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.997 [2024-07-12 15:00:49.443267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:10.997 [2024-07-12 15:00:49.443347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.997 [2024-07-12 15:00:49.443367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e879a0 with addr=10.0.0.2, port=4420 00:21:10.997 [2024-07-12 15:00:49.443383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e879a0 is same with the state(5) to be set 00:21:10.997 [2024-07-12 15:00:49.443399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e879a0 (9): Bad file descriptor 00:21:10.997 [2024-07-12 15:00:49.443413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:10.997 [2024-07-12 15:00:49.443422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:10.997 [2024-07-12 15:00:49.443431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:10.997 [2024-07-12 15:00:49.443445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:10.997 [2024-07-12 15:00:49.451278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:10.997 [2024-07-12 15:00:49.451456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.997 [2024-07-12 15:00:49.451481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e65300 with addr=10.0.0.3, port=4420 00:21:10.997 [2024-07-12 15:00:49.451492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e65300 is same with the state(5) to be set 00:21:10.997 [2024-07-12 15:00:49.451512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65300 (9): Bad file descriptor 00:21:10.997 [2024-07-12 15:00:49.451542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:10.997 [2024-07-12 15:00:49.451551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:10.997 [2024-07-12 15:00:49.451561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:10.997 [2024-07-12 15:00:49.451578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.997 [2024-07-12 15:00:49.453317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:10.997 [2024-07-12 15:00:49.453395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.997 [2024-07-12 15:00:49.453415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e879a0 with addr=10.0.0.2, port=4420 00:21:10.997 [2024-07-12 15:00:49.453426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e879a0 is same with the state(5) to be set 00:21:10.997 [2024-07-12 15:00:49.453441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e879a0 (9): Bad file descriptor 00:21:10.997 [2024-07-12 15:00:49.453455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:10.997 [2024-07-12 15:00:49.453463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:10.997 [2024-07-12 15:00:49.453472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:10.997 [2024-07-12 15:00:49.453487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:10.997 [2024-07-12 15:00:49.461369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:10.997 [2024-07-12 15:00:49.461460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.997 [2024-07-12 15:00:49.461481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e65300 with addr=10.0.0.3, port=4420 00:21:10.997 [2024-07-12 15:00:49.461492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e65300 is same with the state(5) to be set 00:21:10.997 [2024-07-12 15:00:49.461509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65300 (9): Bad file descriptor 00:21:10.997 [2024-07-12 15:00:49.461537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:10.997 [2024-07-12 15:00:49.461547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:10.997 [2024-07-12 15:00:49.461556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:10.997 [2024-07-12 15:00:49.461572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.998 [2024-07-12 15:00:49.463366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:10.998 [2024-07-12 15:00:49.463446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.998 [2024-07-12 15:00:49.463466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e879a0 with addr=10.0.0.2, port=4420 00:21:10.998 [2024-07-12 15:00:49.463477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e879a0 is same with the state(5) to be set 00:21:10.998 [2024-07-12 15:00:49.463492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e879a0 (9): Bad file descriptor 00:21:10.998 [2024-07-12 15:00:49.463506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:10.998 [2024-07-12 15:00:49.463527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:10.998 [2024-07-12 15:00:49.463537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:10.998 [2024-07-12 15:00:49.463552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:10.998 [2024-07-12 15:00:49.471425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:10.998 [2024-07-12 15:00:49.471507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.998 [2024-07-12 15:00:49.471538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e65300 with addr=10.0.0.3, port=4420 00:21:10.998 [2024-07-12 15:00:49.471550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e65300 is same with the state(5) to be set 00:21:10.998 [2024-07-12 15:00:49.471566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65300 (9): Bad file descriptor 00:21:10.998 [2024-07-12 15:00:49.471580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:10.998 [2024-07-12 15:00:49.471588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:10.998 [2024-07-12 15:00:49.471597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:10.998 [2024-07-12 15:00:49.471612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.998 [2024-07-12 15:00:49.473416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:10.998 [2024-07-12 15:00:49.473494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.998 [2024-07-12 15:00:49.473525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e879a0 with addr=10.0.0.2, port=4420 00:21:10.998 [2024-07-12 15:00:49.473537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e879a0 is same with the state(5) to be set 00:21:10.998 [2024-07-12 15:00:49.473553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e879a0 (9): Bad file descriptor 00:21:10.998 [2024-07-12 15:00:49.473567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:10.998 [2024-07-12 15:00:49.473576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:10.998 [2024-07-12 15:00:49.473590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:10.998 [2024-07-12 15:00:49.473605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:10.998 [2024-07-12 15:00:49.481477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:10.998 [2024-07-12 15:00:49.481566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.998 [2024-07-12 15:00:49.481587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e65300 with addr=10.0.0.3, port=4420 00:21:10.998 [2024-07-12 15:00:49.481597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e65300 is same with the state(5) to be set 00:21:10.998 [2024-07-12 15:00:49.481613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65300 (9): Bad file descriptor 00:21:10.998 [2024-07-12 15:00:49.481627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:10.998 [2024-07-12 15:00:49.481635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:10.998 [2024-07-12 15:00:49.481644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:10.998 [2024-07-12 15:00:49.481659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.998 [2024-07-12 15:00:49.483464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:10.998 [2024-07-12 15:00:49.483551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.998 [2024-07-12 15:00:49.483571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e879a0 with addr=10.0.0.2, port=4420 00:21:10.998 [2024-07-12 15:00:49.483581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e879a0 is same with the state(5) to be set 00:21:10.998 [2024-07-12 15:00:49.483597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e879a0 (9): Bad file descriptor 00:21:10.998 [2024-07-12 15:00:49.483611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:10.998 [2024-07-12 15:00:49.483620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:10.998 [2024-07-12 15:00:49.483629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:10.998 [2024-07-12 15:00:49.483643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:10.998 [2024-07-12 15:00:49.491533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:10.998 [2024-07-12 15:00:49.491616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.998 [2024-07-12 15:00:49.491636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e65300 with addr=10.0.0.3, port=4420 00:21:10.998 [2024-07-12 15:00:49.491647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e65300 is same with the state(5) to be set 00:21:10.998 [2024-07-12 15:00:49.491663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65300 (9): Bad file descriptor 00:21:10.998 [2024-07-12 15:00:49.491676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:10.998 [2024-07-12 15:00:49.491685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:10.998 [2024-07-12 15:00:49.491694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:10.998 [2024-07-12 15:00:49.491708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.998 [2024-07-12 15:00:49.493513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:10.998 [2024-07-12 15:00:49.493599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.998 [2024-07-12 15:00:49.493619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e879a0 with addr=10.0.0.2, port=4420 00:21:10.998 [2024-07-12 15:00:49.493630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e879a0 is same with the state(5) to be set 00:21:10.998 [2024-07-12 15:00:49.493646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e879a0 (9): Bad file descriptor 00:21:10.998 [2024-07-12 15:00:49.493660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:10.998 [2024-07-12 15:00:49.493668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:10.998 [2024-07-12 15:00:49.493677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:10.998 [2024-07-12 15:00:49.493691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:10.998 [2024-07-12 15:00:49.501588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:10.998 [2024-07-12 15:00:49.501680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.998 [2024-07-12 15:00:49.501701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e65300 with addr=10.0.0.3, port=4420 00:21:10.998 [2024-07-12 15:00:49.501712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e65300 is same with the state(5) to be set 00:21:10.998 [2024-07-12 15:00:49.501727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65300 (9): Bad file descriptor 00:21:10.998 [2024-07-12 15:00:49.501741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:10.998 [2024-07-12 15:00:49.501750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:10.998 [2024-07-12 15:00:49.501759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:10.998 [2024-07-12 15:00:49.501773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.998 [2024-07-12 15:00:49.503571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:10.998 [2024-07-12 15:00:49.503649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.998 [2024-07-12 15:00:49.503669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e879a0 with addr=10.0.0.2, port=4420 00:21:10.998 [2024-07-12 15:00:49.503680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e879a0 is same with the state(5) to be set 00:21:10.998 [2024-07-12 15:00:49.503695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e879a0 (9): Bad file descriptor 00:21:10.998 [2024-07-12 15:00:49.503709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:10.998 [2024-07-12 15:00:49.503717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:10.998 [2024-07-12 15:00:49.503726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:10.998 [2024-07-12 15:00:49.503741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:10.998 [2024-07-12 15:00:49.511647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:10.998 [2024-07-12 15:00:49.511737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.998 [2024-07-12 15:00:49.511758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e65300 with addr=10.0.0.3, port=4420 00:21:10.998 [2024-07-12 15:00:49.511769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e65300 is same with the state(5) to be set 00:21:10.998 [2024-07-12 15:00:49.511785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65300 (9): Bad file descriptor 00:21:10.998 [2024-07-12 15:00:49.511798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:10.998 [2024-07-12 15:00:49.511807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:10.998 [2024-07-12 15:00:49.511815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:10.998 [2024-07-12 15:00:49.511830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.999 [2024-07-12 15:00:49.513621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:10.999 [2024-07-12 15:00:49.513701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.999 [2024-07-12 15:00:49.513722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e879a0 with addr=10.0.0.2, port=4420 00:21:10.999 [2024-07-12 15:00:49.513732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e879a0 is same with the state(5) to be set 00:21:10.999 [2024-07-12 15:00:49.513748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e879a0 (9): Bad file descriptor 00:21:10.999 [2024-07-12 15:00:49.513761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:10.999 [2024-07-12 15:00:49.513770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:10.999 [2024-07-12 15:00:49.513779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:10.999 [2024-07-12 15:00:49.513793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:10.999 [2024-07-12 15:00:49.521714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:10.999 [2024-07-12 15:00:49.521876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.999 [2024-07-12 15:00:49.521900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e65300 with addr=10.0.0.3, port=4420 00:21:10.999 [2024-07-12 15:00:49.521912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e65300 is same with the state(5) to be set 00:21:10.999 [2024-07-12 15:00:49.521931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65300 (9): Bad file descriptor 00:21:10.999 [2024-07-12 15:00:49.521946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:10.999 [2024-07-12 15:00:49.521955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:10.999 [2024-07-12 15:00:49.521965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:10.999 [2024-07-12 15:00:49.521980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.999 [2024-07-12 15:00:49.523672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:10.999 [2024-07-12 15:00:49.523750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.999 [2024-07-12 15:00:49.523770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e879a0 with addr=10.0.0.2, port=4420 00:21:10.999 [2024-07-12 15:00:49.523780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e879a0 is same with the state(5) to be set 00:21:10.999 [2024-07-12 15:00:49.523796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e879a0 (9): Bad file descriptor 00:21:10.999 [2024-07-12 15:00:49.523809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:10.999 [2024-07-12 15:00:49.523818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:10.999 [2024-07-12 15:00:49.523827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:10.999 [2024-07-12 15:00:49.523841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:10.999 [2024-07-12 15:00:49.531801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:21:10.999 [2024-07-12 15:00:49.531888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.999 [2024-07-12 15:00:49.531908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e65300 with addr=10.0.0.3, port=4420 00:21:10.999 [2024-07-12 15:00:49.531919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e65300 is same with the state(5) to be set 00:21:10.999 [2024-07-12 15:00:49.531935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65300 (9): Bad file descriptor 00:21:10.999 [2024-07-12 15:00:49.531949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:21:10.999 [2024-07-12 15:00:49.531958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:21:10.999 [2024-07-12 15:00:49.531967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:21:10.999 [2024-07-12 15:00:49.531982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.999 [2024-07-12 15:00:49.533721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:10.999 [2024-07-12 15:00:49.533798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.999 [2024-07-12 15:00:49.533819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e879a0 with addr=10.0.0.2, port=4420 00:21:10.999 [2024-07-12 15:00:49.533829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e879a0 is same with the state(5) to be set 00:21:10.999 [2024-07-12 15:00:49.533845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e879a0 (9): Bad file descriptor 00:21:10.999 [2024-07-12 15:00:49.533858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:10.999 [2024-07-12 15:00:49.533867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:10.999 [2024-07-12 15:00:49.533876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:10.999 [2024-07-12 15:00:49.533890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
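The bursts of errno 111 (connection refused) above are the already-attached controllers polling for reconnect against port 4420 after that listener has gone away; the discovery records that follow report cnode20 and cnode0 as "not found" on 4420 and "found again" on 4421, at which point the paths move over. The listener change itself is not part of this excerpt; purely as an illustrative sketch (assuming the stock scripts/rpc.py interface and the addresses used in this run, not taken from this log), the target side would have issued something like:

  # hypothetical sketch of the listener removal implied by the retries above
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4420
  # port 4421 was already being advertised, so the next discovery log page steers both paths there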
00:21:10.999 [2024-07-12 15:00:49.540130] bdev_nvme.c:6778:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:21:10.999 [2024-07-12 15:00:49.540161] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:21:10.999 [2024-07-12 15:00:49.540197] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:10.999 [2024-07-12 15:00:49.540233] bdev_nvme.c:6778:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:10.999 [2024-07-12 15:00:49.540259] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:10.999 [2024-07-12 15:00:49.540275] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:10.999 [2024-07-12 15:00:49.626230] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:21:10.999 [2024-07-12 15:00:49.626302] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:11.949 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.207 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:21:12.207 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:21:12.207 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:12.207 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.207 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.207 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:21:12.207 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.207 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:21:12.207 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:21:12.207 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:21:12.207 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:21:12.207 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.207 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.207 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.207 15:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:21:12.207 [2024-07-12 15:00:50.707009] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:13.141 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.399 [2024-07-12 15:00:51.904130] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:21:13.399 2024/07/12 15:00:51 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: 
Code=-17 Msg=File exists 00:21:13.399 request: 00:21:13.399 { 00:21:13.399 "method": "bdev_nvme_start_mdns_discovery", 00:21:13.399 "params": { 00:21:13.399 "name": "mdns", 00:21:13.399 "svcname": "_nvme-disc._http", 00:21:13.399 "hostnqn": "nqn.2021-12.io.spdk:test" 00:21:13.399 } 00:21:13.399 } 00:21:13.399 Got JSON-RPC error response 00:21:13.399 GoRPCClient: error on JSON-RPC call 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:13.399 15:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:21:13.965 [2024-07-12 15:00:52.492775] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:21:13.965 [2024-07-12 15:00:52.592774] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:21:14.223 [2024-07-12 15:00:52.692786] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:14.223 [2024-07-12 15:00:52.692836] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:21:14.223 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:14.223 cookie is 0 00:21:14.223 is_local: 1 00:21:14.223 our_own: 0 00:21:14.223 wide_area: 0 00:21:14.223 multicast: 1 00:21:14.223 cached: 1 00:21:14.223 [2024-07-12 15:00:52.792787] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:14.223 [2024-07-12 15:00:52.792837] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:21:14.223 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:14.223 cookie is 0 00:21:14.223 is_local: 1 00:21:14.223 our_own: 0 00:21:14.223 wide_area: 0 00:21:14.223 multicast: 1 00:21:14.223 cached: 1 00:21:14.223 [2024-07-12 15:00:52.792853] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:21:14.481 [2024-07-12 15:00:52.893769] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:14.481 [2024-07-12 15:00:52.893821] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:21:14.481 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:14.481 cookie is 0 00:21:14.481 is_local: 1 00:21:14.481 our_own: 0 00:21:14.481 wide_area: 0 00:21:14.481 multicast: 1 00:21:14.481 cached: 1 00:21:14.481 [2024-07-12 15:00:52.992789] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:14.481 [2024-07-12 15:00:52.992839] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:21:14.481 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:14.481 cookie is 0 00:21:14.481 is_local: 1 00:21:14.481 our_own: 0 00:21:14.481 wide_area: 0 00:21:14.481 multicast: 1 00:21:14.481 cached: 1 00:21:14.481 [2024-07-12 15:00:52.992856] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:21:15.411 [2024-07-12 15:00:53.702507] bdev_nvme.c:6991:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:15.411 [2024-07-12 15:00:53.702576] bdev_nvme.c:7071:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:15.411 [2024-07-12 15:00:53.702611] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:15.411 [2024-07-12 15:00:53.790673] bdev_nvme.c:6920:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:21:15.411 [2024-07-12 15:00:53.857344] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:21:15.411 [2024-07-12 15:00:53.857396] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:21:15.411 [2024-07-12 15:00:53.902303] bdev_nvme.c:6991:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:15.411 [2024-07-12 15:00:53.902353] bdev_nvme.c:7071:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:15.411 [2024-07-12 15:00:53.902374] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:15.411 [2024-07-12 15:00:53.988466] bdev_nvme.c:6920:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:21:15.412 [2024-07-12 15:00:54.049149] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:21:15.412 [2024-07-12 15:00:54.049201] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:18.690 15:00:56 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:18.690 15:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.690 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:21:18.690 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:21:18.690 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:18.690 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:18.690 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.690 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.690 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:18.690 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:18.690 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.691 [2024-07-12 15:00:57.098405] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:21:18.691 2024/07/12 15:00:57 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:21:18.691 request: 00:21:18.691 { 00:21:18.691 "method": "bdev_nvme_start_mdns_discovery", 00:21:18.691 "params": { 00:21:18.691 "name": "cdc", 00:21:18.691 "svcname": "_nvme-disc._tcp", 00:21:18.691 "hostnqn": "nqn.2021-12.io.spdk:test" 00:21:18.691 } 00:21:18.691 } 00:21:18.691 Got JSON-RPC error response 00:21:18.691 GoRPCClient: error on JSON-RPC call 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 94129 00:21:18.691 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 94129 00:21:18.691 [2024-07-12 15:00:57.314807] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 94159 00:21:18.949 Got SIGTERM, quitting. 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:21:18.949 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:21:18.949 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:21:18.949 avahi-daemon 0.8 exiting. 
00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:18.949 rmmod nvme_tcp 00:21:18.949 rmmod nvme_fabrics 00:21:18.949 rmmod nvme_keyring 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 94078 ']' 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 94078 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 94078 ']' 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 94078 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94078 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:18.949 killing process with pid 94078 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94078' 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 94078 00:21:18.949 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 94078 00:21:19.207 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:19.207 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:19.207 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:19.207 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.207 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.207 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.207 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.207 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.207 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:19.207 00:21:19.207 real 0m20.495s 00:21:19.207 user 0m40.353s 00:21:19.207 sys 0m1.939s 00:21:19.207 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:19.207 ************************************ 00:21:19.207 15:00:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.207 END TEST nvmf_mdns_discovery 00:21:19.207 ************************************ 00:21:19.207 15:00:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 
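For reference, the mDNS discovery flow exercised above comes down to a handful of bdev_nvme RPCs against the host application's socket. This is a condensed, illustrative sketch only (it assumes scripts/rpc.py is invoked directly instead of the rpc_cmd wrapper shown in the trace); the parameters are the ones visible in the log:

  # start mDNS-based discovery; a controller/bdev set is created per advertised subsystem
  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # inspect what was found
  rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
  rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
  rpc.py -s /tmp/host.sock bdev_get_bdevs
  # starting a second instance that reuses either the name or the service type is rejected
  # with -17 (File exists) -- the two negative cases checked above
  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test   # fails: name "mdns" in use
  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc  -s _nvme-disc._tcp  -q nqn.2021-12.io.spdk:test   # fails: service already polled
  # tear the avahi poller down again
  rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns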
00:21:19.207 15:00:57 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:21:19.207 15:00:57 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:19.207 15:00:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:19.207 15:00:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:19.207 15:00:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:19.207 ************************************ 00:21:19.207 START TEST nvmf_host_multipath 00:21:19.207 ************************************ 00:21:19.207 15:00:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:19.207 * Looking for test storage... 00:21:19.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:19.207 15:00:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:19.207 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:19.207 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.207 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.207 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.207 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.207 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.207 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.207 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.207 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.207 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.207 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:19.465 15:00:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:19.466 Cannot 
find device "nvmf_tgt_br" 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:19.466 Cannot find device "nvmf_tgt_br2" 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:19.466 Cannot find device "nvmf_tgt_br" 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:19.466 Cannot find device "nvmf_tgt_br2" 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:19.466 15:00:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:19.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:19.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:19.466 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:19.466 15:00:58 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:19.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:21:19.725 00:21:19.725 --- 10.0.0.2 ping statistics --- 00:21:19.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.725 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:19.725 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:19.725 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:21:19.725 00:21:19.725 --- 10.0.0.3 ping statistics --- 00:21:19.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.725 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:19.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:19.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:21:19.725 00:21:19.725 --- 10.0.0.1 ping statistics --- 00:21:19.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.725 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:19.725 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=94712 00:21:19.726 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 94712 00:21:19.726 15:00:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:19.726 15:00:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94712 ']' 00:21:19.726 15:00:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.726 15:00:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.726 15:00:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.726 15:00:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.726 15:00:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:19.726 [2024-07-12 15:00:58.298844] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:21:19.726 [2024-07-12 15:00:58.298956] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.984 [2024-07-12 15:00:58.441571] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:19.984 [2024-07-12 15:00:58.528050] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
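The nvmf_veth_init section above builds the virtual topology that the multipath test (and the mdns test before it) runs against. Stripped of the xtrace noise, and keeping only the commands actually shown in the trace, the setup is roughly:

  # network namespace for the target, veth pairs for the initiator and two target interfaces
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP traffic and bridge forwarding, then verify reachability in both directions
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1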
00:21:19.984 [2024-07-12 15:00:58.528116] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.984 [2024-07-12 15:00:58.528133] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.984 [2024-07-12 15:00:58.528145] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.984 [2024-07-12 15:00:58.528155] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.984 [2024-07-12 15:00:58.528495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.984 [2024-07-12 15:00:58.528894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.918 15:00:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.918 15:00:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:21:20.918 15:00:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:20.918 15:00:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:20.918 15:00:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:20.918 15:00:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.918 15:00:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94712 00:21:20.918 15:00:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:20.918 [2024-07-12 15:00:59.564381] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.176 15:00:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:21.435 Malloc0 00:21:21.435 15:00:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:21.697 15:01:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:21.960 15:01:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.225 [2024-07-12 15:01:00.738151] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.225 15:01:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:22.812 [2024-07-12 15:01:01.190367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:22.812 15:01:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=94821 00:21:22.812 15:01:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:22.812 15:01:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:22.812 15:01:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # 
waitforlisten 94821 /var/tmp/bdevperf.sock 00:21:22.812 15:01:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94821 ']' 00:21:22.812 15:01:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.812 15:01:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.812 15:01:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.812 15:01:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.812 15:01:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:23.776 15:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.776 15:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:21:23.776 15:01:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:24.034 15:01:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:24.599 Nvme0n1 00:21:24.599 15:01:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:24.857 Nvme0n1 00:21:24.857 15:01:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:24.857 15:01:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:26.229 15:01:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:26.229 15:01:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:26.229 15:01:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:26.487 15:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:26.487 15:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94913 00:21:26.487 15:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:26.487 15:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94712 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:33.074 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:33.074 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:33.074 15:01:11 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:33.074 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:33.074 Attaching 4 probes... 00:21:33.074 @path[10.0.0.2, 4421]: 17046 00:21:33.074 @path[10.0.0.2, 4421]: 17323 00:21:33.074 @path[10.0.0.2, 4421]: 17345 00:21:33.074 @path[10.0.0.2, 4421]: 16219 00:21:33.074 @path[10.0.0.2, 4421]: 16032 00:21:33.074 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:33.074 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:33.074 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:33.074 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:33.074 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:33.074 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:33.074 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94913 00:21:33.074 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:33.074 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:33.074 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:33.074 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:33.394 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:33.394 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95044 00:21:33.394 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94712 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:33.394 15:01:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:39.984 15:01:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:39.984 15:01:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:39.984 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:39.984 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:39.984 Attaching 4 probes... 
00:21:39.984 @path[10.0.0.2, 4420]: 16820 00:21:39.984 @path[10.0.0.2, 4420]: 16927 00:21:39.984 @path[10.0.0.2, 4420]: 15091 00:21:39.984 @path[10.0.0.2, 4420]: 16986 00:21:39.984 @path[10.0.0.2, 4420]: 17057 00:21:39.984 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:39.984 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:39.984 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:39.984 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:39.984 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:39.984 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:39.984 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95044 00:21:39.984 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:39.984 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:39.984 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:39.984 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:40.241 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:40.241 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95179 00:21:40.241 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94712 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:40.241 15:01:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:46.797 15:01:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:46.797 15:01:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:46.797 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:46.797 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:46.797 Attaching 4 probes... 
00:21:46.797 @path[10.0.0.2, 4421]: 14026 00:21:46.797 @path[10.0.0.2, 4421]: 16923 00:21:46.797 @path[10.0.0.2, 4421]: 17107 00:21:46.797 @path[10.0.0.2, 4421]: 17176 00:21:46.797 @path[10.0.0.2, 4421]: 17168 00:21:46.797 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:46.797 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:46.797 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:46.797 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:46.797 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:46.797 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:46.797 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95179 00:21:46.797 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:46.797 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:46.797 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:46.797 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:47.364 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:47.364 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95305 00:21:47.364 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:47.364 15:01:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94712 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:53.912 15:01:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:53.912 15:01:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:53.912 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:53.912 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:53.912 Attaching 4 probes... 
00:21:53.912 00:21:53.912 00:21:53.912 00:21:53.912 00:21:53.912 00:21:53.912 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:53.912 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:53.912 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:53.912 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:53.912 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:53.912 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:53.912 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95305 00:21:53.912 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:53.912 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:53.912 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:53.912 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:54.170 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:54.170 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95440 00:21:54.170 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:54.170 15:01:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94712 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:00.717 15:01:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:00.717 15:01:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:00.717 15:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:00.717 15:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:00.717 Attaching 4 probes... 
00:22:00.717 @path[10.0.0.2, 4421]: 15421 00:22:00.717 @path[10.0.0.2, 4421]: 16309 00:22:00.717 @path[10.0.0.2, 4421]: 16864 00:22:00.717 @path[10.0.0.2, 4421]: 16654 00:22:00.717 @path[10.0.0.2, 4421]: 15020 00:22:00.717 15:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:00.717 15:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:00.717 15:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:00.717 15:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:00.717 15:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:00.717 15:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:00.717 15:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95440 00:22:00.717 15:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:00.717 15:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:00.717 [2024-07-12 15:01:39.293384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293450] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 
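Each confirm_io_on_port pass above (and the remaining ones below) follows the same pattern: run the nvmf_path.bt bpftrace probe against the target pid for ~6 s of bdevperf I/O, ask the target which listener is currently in the requested ANA state, and check that the per-path @path[...] counters in trace.txt landed on that same port. A rough reconstruction from the commands recorded in this log (function and variable names match the xtrace output; the trace.txt wiring and exact quoting are inferred, this is not the actual multipath.sh source):

confirm_io_on_port() {    # e.g. confirm_io_on_port optimized 4421
    local ana_state=$1 expected_port=$2
    # per-path I/O counters for the target pid; output lands in trace.txt (wiring inferred)
    "$rootdir/scripts/bpftrace.sh" "$nvmfapp_pid" "$rootdir/scripts/bpf/nvmf_path.bt" &> trace.txt &
    dtrace_pid=$!
    sleep 6
    # which listener does the target report in the requested ANA state?
    active_port=$("$rootdir/scripts/rpc.py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r ".[] | select (.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")
    # which port did the I/O actually hit, per the first @path[10.0.0.2, ...] counter?
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
    kill $dtrace_pid
    rm -f trace.txt
    [[ $port == "$expected_port" && $active_port == "$expected_port" ]]
}

The earlier pass with both paths set inaccessible (confirm_io_on_port '' '') is the degenerate case: the jq filter matches no listener, no @path counters appear, and both comparisons reduce to [[ '' == '' ]]. The nvmf_subsystem_remove_listener call here removes the 4421 path outright while bdevperf I/O is still running; the burst of nvmf_tcp_qpair_set_recv_state messages is emitted while that listener is torn down, and the next confirm_io_on_port (non_optimized, 4420) verifies that traffic failed over to the remaining path.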
[2024-07-12 15:01:39.293568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.717 [2024-07-12 15:01:39.293694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293727] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the 
state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293862] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293873] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 [2024-07-12 15:01:39.293922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe569f0 is same with the state(5) to be set 00:22:00.718 15:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:02.087 15:01:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:02.087 15:01:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95572 00:22:02.088 15:01:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94712 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:02.088 15:01:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:08.645 15:01:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:08.645 15:01:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:08.645 15:01:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:08.645 15:01:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:08.645 Attaching 4 probes... 
00:22:08.645 @path[10.0.0.2, 4420]: 16147 00:22:08.645 @path[10.0.0.2, 4420]: 16113 00:22:08.645 @path[10.0.0.2, 4420]: 16074 00:22:08.645 @path[10.0.0.2, 4420]: 16471 00:22:08.645 @path[10.0.0.2, 4420]: 16054 00:22:08.645 15:01:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:08.645 15:01:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:08.645 15:01:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:08.645 15:01:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:08.645 15:01:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:08.645 15:01:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:08.645 15:01:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95572 00:22:08.645 15:01:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:08.645 15:01:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:08.645 [2024-07-12 15:01:47.009167] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:08.645 15:01:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:08.901 15:01:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:15.619 15:01:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:15.619 15:01:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95773 00:22:15.619 15:01:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:15.619 15:01:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94712 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:20.879 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:20.879 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:21.136 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:21.136 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:21.136 Attaching 4 probes... 
00:22:21.136 @path[10.0.0.2, 4421]: 16161 00:22:21.136 @path[10.0.0.2, 4421]: 16520 00:22:21.136 @path[10.0.0.2, 4421]: 16520 00:22:21.136 @path[10.0.0.2, 4421]: 16275 00:22:21.136 @path[10.0.0.2, 4421]: 14926 00:22:21.136 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:21.136 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:21.136 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:21.136 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:21.136 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:21.136 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:21.136 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95773 00:22:21.136 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:21.406 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 94821 00:22:21.406 15:01:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94821 ']' 00:22:21.406 15:01:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94821 00:22:21.407 15:01:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:22:21.407 15:01:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.407 15:01:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94821 00:22:21.407 killing process with pid 94821 00:22:21.407 15:01:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:21.407 15:01:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:21.407 15:01:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94821' 00:22:21.407 15:01:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94821 00:22:21.407 15:01:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94821 00:22:21.407 Connection closed with partial response: 00:22:21.407 00:22:21.407 00:22:21.407 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 94821 00:22:21.407 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:21.407 [2024-07-12 15:01:01.293314] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:22:21.407 [2024-07-12 15:01:01.293529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94821 ] 00:22:21.407 [2024-07-12 15:01:01.436513] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.407 [2024-07-12 15:01:01.495968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.407 Running I/O for 90 seconds... 
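What follows is the bdevperf-side log captured in try.txt. For orientation, the host end of the test was wired up earlier in the run roughly as below (arguments copied from the xtrace lines above; a condensed sketch of the recorded RPCs, not the script itself). The key detail is that both bdev_nvme_attach_controller calls use the same controller name and NQN and both report Nvme0n1, with the second call passing -x multipath, so 10.0.0.2:4420 and 10.0.0.2:4421 become two paths of one bdev rather than two separate bdevs:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
$RPC -s $SOCK bdev_nvme_set_options -r -1
$RPC -s $SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10                     # first path (port 4420)
$RPC -s $SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10        # second path (port 4421), multipath mode
# kick off the workload defined on the bdevperf command line above (-q 128 -o 4096 -w verify -t 90)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s $SOCK perform_tests

The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions in the trace below are the expected side effect of the ANA flips: commands outstanding on a path whose listener was set inaccessible complete with that status, and the bdev_nvme multipath layer fails the I/O over to the other path, which is why each confirm_io_on_port pass above sees all @path counters on exactly one port.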
00:22:21.407 [2024-07-12 15:01:11.920713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.920781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.920836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.920855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.920877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.920892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.920913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.920928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.920948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.920962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.920983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.920997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:21.407 [2024-07-12 15:01:11.921911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.921968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.921982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.923304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.923335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.923363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.923380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.923402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.407 [2024-07-12 15:01:11.923417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:21.407 [2024-07-12 15:01:11.923450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.923466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.923488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.923502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.923538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.923556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.923577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.923593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.923614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.923628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.923650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.923664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.923685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.923699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.923720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.923734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.923755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.923769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.923790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.923804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.923825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.923840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.923861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.923876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.923896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.923918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.923941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.923956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.923977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.923992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.408 [2024-07-12 15:01:11.924239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.408 [2024-07-12 15:01:11.924289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.408 [2024-07-12 15:01:11.924324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:22:21.408 [2024-07-12 15:01:11.924345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.408 [2024-07-12 15:01:11.924365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.408 [2024-07-12 15:01:11.924404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.924968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.924989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.408 [2024-07-12 15:01:11.925005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:21.408 [2024-07-12 15:01:11.925026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
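Every completion in this stretch carries the same status, ASYMMETRIC ACCESS INACCESSIBLE (03/02): Status Code Type 0x3 (Path Related Status) with Status Code 0x02, which a target reports while the ANA state of the path being used is inaccessible. The sketch below is not SPDK code; it is a minimal, self-contained decoder showing how the fields printed on each line (cdw0, sqhd, p, m, dnr and the SCT/SC pair) map onto a standard NVMe completion queue entry, with the DW3 value constructed hypothetically for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Minimal illustration of the completion fields printed in the log above.
 * Field layout follows the NVMe completion queue entry; this is not SPDK code. */
struct cqe_fields {
    uint32_t cdw0;   /* command-specific result (DW0)             */
    uint16_t sqhd;   /* submission queue head pointer (DW2[15:0]) */
    uint8_t  p;      /* phase tag        (DW3 bit 16)             */
    uint8_t  sc;     /* status code      (DW3 bits 24:17)         */
    uint8_t  sct;    /* status code type (DW3 bits 27:25)         */
    uint8_t  m;      /* more             (DW3 bit 30)             */
    uint8_t  dnr;    /* do not retry     (DW3 bit 31)             */
};

static struct cqe_fields decode_cqe(uint32_t dw0, uint32_t dw2, uint32_t dw3)
{
    struct cqe_fields f = {
        .cdw0 = dw0,
        .sqhd = (uint16_t)(dw2 & 0xffffu),
        .p    = (uint8_t)((dw3 >> 16) & 0x1u),
        .sc   = (uint8_t)((dw3 >> 17) & 0xffu),
        .sct  = (uint8_t)((dw3 >> 25) & 0x7u),
        .m    = (uint8_t)((dw3 >> 30) & 0x1u),
        .dnr  = (uint8_t)((dw3 >> 31) & 0x1u),
    };
    return f;
}

int main(void)
{
    /* Hypothetical DW3 encoding the (03/02) status seen in the log:
     * SCT = 0x3 (Path Related Status), SC = 0x02, p/m/dnr all 0. */
    uint32_t dw3 = (0x3u << 25) | (0x02u << 17);
    struct cqe_fields f = decode_cqe(0, 0x0048, dw3);

    printf("(%02x/%02x) cdw0:%x sqhd:%04x p:%u m:%u dnr:%u\n",
           f.sct, f.sc, f.cdw0, f.sqhd, f.p, f.m, f.dnr);
    /* prints: (03/02) cdw0:0 sqhd:0048 p:0 m:0 dnr:0 */
    return 0;
}
```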
00:22:21.409 [2024-07-12 15:01:11.925435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:11.925634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.925670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.925705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.925748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.925785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 
lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.925820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.925855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.925890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.925925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.925961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.925981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.925996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.926017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.926031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.926055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.926070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.926091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.926106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.926127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.926141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.926166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.926187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:11.928225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:11.928267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:18.562819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:18.562884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:18.562940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:18.562961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:18.562984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:18.562999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:18.563020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.409 [2024-07-12 15:01:18.563034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:18.563056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:18.563070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:18.563090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:18.563105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:18.563126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:18.563140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:18.563161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:18.563175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:22:21.409 [2024-07-12 15:01:18.563196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.409 [2024-07-12 15:01:18.563210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.409 [2024-07-12 15:01:18.563231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.410 [2024-07-12 15:01:18.563900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.563937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.563972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.563993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.564007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.564029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.564044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
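The sqhd value printed with each completion is the Submission Queue Head Pointer from DW2 of the entry; in the output above it advances by one per completion and wraps from 0x007f back to 0x0000, consistent with an I/O queue of 0x80 (128) entries on this connection. A minimal sketch of that wraparound arithmetic follows; the queue size is inferred from the wrap point visible in the log, not taken from the test configuration.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Queue size inferred from the observed wrap (sqhd: 007e, 007f, 0000, ...). */
    const uint16_t qsize = 0x80;
    uint16_t sqhd = 0x007e;

    for (int i = 0; i < 4; i++) {
        printf("sqhd:%04x\n", sqhd);
        sqhd = (uint16_t)((sqhd + 1) % qsize);   /* head advances modulo qsize */
    }
    /* prints 007e, 007f, 0000, 0001 -- matching the sequence in the log */
    return 0;
}
```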
00:22:21.410 [2024-07-12 15:01:18.565450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.565974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.410 [2024-07-12 15:01:18.565989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:21.410 [2024-07-12 15:01:18.566014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:21.411 
[2024-07-12 15:01:18.566812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.566978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.566992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.567026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.567042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.567068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.567083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.567109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.567123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.567150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.567164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.567191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.567206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 
cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.567232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.567248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.567274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.567288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.567315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.567329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.567355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.567370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.567396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.567411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.567437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.567451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.567477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.567492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.567529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.567552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:21.411 [2024-07-12 15:01:18.567580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.411 [2024-07-12 15:01:18.567596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.567628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:18.567643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.567669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:18.567684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.567711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:18.567726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.567754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:18.567769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.567795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:18.567810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.567836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:18.567850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.567877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:18.567892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.567918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:18.567932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.567959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:18.567973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.567999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568061] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:111 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.568962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:18.568991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.412 [2024-07-12 15:01:18.569007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:25.724066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:25.724140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:25.724201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:25.724226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:25.724299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:25.724316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:25.724337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:25.724352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:25.724373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:25.724387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:25.724408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:25.724423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 15:01:25.724443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:25.724458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:21.412 [2024-07-12 
15:01:25.724479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.412 [2024-07-12 15:01:25.724493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.725706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.413 [2024-07-12 15:01:25.725870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.725992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.413 [2024-07-12 15:01:25.726084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.726198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.413 [2024-07-12 15:01:25.726316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.726436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.413 [2024-07-12 15:01:25.726542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.726662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.413 [2024-07-12 15:01:25.726751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.726840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.413 [2024-07-12 15:01:25.726934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.727032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.413 [2024-07-12 15:01:25.727115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.727206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.413 [2024-07-12 15:01:25.727300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.727400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.727486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.727619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.727701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.727794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.727932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.728053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.728161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.728272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.728365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.728465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.728586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.728702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.728775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.728862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.728947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.729037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.729116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.729213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.729284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.729377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.729418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.729445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.729460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.729483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.729497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.729544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.729571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.729596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.729611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.729645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.729663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.729686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.729701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.729724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.729738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.729905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.729930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.729959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.729975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.730000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.730015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.730039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.730054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.730078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.730107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.730134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.730149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.730174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.730188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.730213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.730227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.730252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.730267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.730291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.730306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.730330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.730345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.730370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.730384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.730409] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.730423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.730448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.730463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.730488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.730502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:21.413 [2024-07-12 15:01:25.730551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.413 [2024-07-12 15:01:25.730585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.730616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.730641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.730669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.730685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.730713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.730737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.730764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.730779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.730804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.730819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.730843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.730858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:21.414 
[2024-07-12 15:01:25.730883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.730897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.730922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.730936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.730960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.730976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.731969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.731984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.732008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.732023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.732048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.732063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.732087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:21.414 [2024-07-12 15:01:25.732102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.732127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.732147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.732176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.414 [2024-07-12 15:01:25.732199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.732225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.414 [2024-07-12 15:01:25.732240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.732283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.414 [2024-07-12 15:01:25.732299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.732323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.414 [2024-07-12 15:01:25.732338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:21.414 [2024-07-12 15:01:25.732362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.414 [2024-07-12 15:01:25.732377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.732402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.732416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.732441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.732455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.732480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.732495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.732531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 
nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.732548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.732573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.732587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.732612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.732627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.732651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.732666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.732690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.732713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.732739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.732754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.732780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.732795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.732819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.732838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.732870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.732886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.732911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.732925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.732950] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.732964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.732989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.733003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.733028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.733042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.733067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.733082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.733107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.733121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.733146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.733160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.733185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.733199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.733231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.733247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.733272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.733286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.733311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.733325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 
cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.733350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.733364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.733389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.733404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.733428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.733442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.733467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.733482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:25.733507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:111400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.415 [2024-07-12 15:01:25.733543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:39.293604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.415 [2024-07-12 15:01:39.293659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:39.293715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.415 [2024-07-12 15:01:39.293737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:39.293772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.415 [2024-07-12 15:01:39.293799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:39.293829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.415 [2024-07-12 15:01:39.293845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:39.293895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.415 [2024-07-12 15:01:39.293923] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:39.293967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.415 [2024-07-12 15:01:39.293992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:39.294015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.415 [2024-07-12 15:01:39.294029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:21.415 [2024-07-12 15:01:39.294051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.416 [2024-07-12 15:01:39.294075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.416 [2024-07-12 15:01:39.294128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.416 [2024-07-12 15:01:39.294163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.416 [2024-07-12 15:01:39.294198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.416 [2024-07-12 15:01:39.294233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.416 [2024-07-12 15:01:39.294267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.416 [2024-07-12 15:01:39.294301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.416 [2024-07-12 15:01:39.294336] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.416 [2024-07-12 15:01:39.294370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.294417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.294465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.294500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.294557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.294593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.294629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.294664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.294700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.294735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.294770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.294791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.294805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:21.416 [2024-07-12 15:01:39.295464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.416 [2024-07-12 15:01:39.295660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295782] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.416 [2024-07-12 15:01:39.295896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.416 [2024-07-12 15:01:39.295911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.295924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.295939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.295953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.295968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.295981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.295996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.296009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.296048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.296085] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.296114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.296142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 
15:01:39.296715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.296880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.296908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.296937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.296965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.296980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.296994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.297009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.297022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.297037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.297052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.297067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.297080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.297096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.417 [2024-07-12 15:01:39.297109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.297124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.297137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.297152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.297166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.297181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.417 [2024-07-12 15:01:39.297202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.417 [2024-07-12 15:01:39.297218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.418 [2024-07-12 15:01:39.297232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.418 [2024-07-12 15:01:39.297260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.418 [2024-07-12 15:01:39.297290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:123 nsid:1 lba:10680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.418 [2024-07-12 15:01:39.297319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.418 [2024-07-12 15:01:39.297347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10176 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:21.418 [2024-07-12 15:01:39.297915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.297977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.297993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.298008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.298023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.298037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.298052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.298065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.298080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.418 [2024-07-12 15:01:39.298093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.298108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.418 [2024-07-12 15:01:39.298121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.298137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.418 [2024-07-12 15:01:39.298150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.298165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.418 [2024-07-12 15:01:39.298178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.298193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.418 [2024-07-12 15:01:39.298207] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.298222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.418 [2024-07-12 15:01:39.298236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.298252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.418 [2024-07-12 15:01:39.298265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.298757] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17da990 was disconnected and freed. reset controller. 00:22:21.418 [2024-07-12 15:01:39.299969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:21.418 [2024-07-12 15:01:39.300055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.418 [2024-07-12 15:01:39.300092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.418 [2024-07-12 15:01:39.300128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dbd10 (9): Bad file descriptor 00:22:21.418 [2024-07-12 15:01:39.300266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.418 [2024-07-12 15:01:39.300297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dbd10 with addr=10.0.0.2, port=4421 00:22:21.418 [2024-07-12 15:01:39.300314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dbd10 is same with the state(5) to be set 00:22:21.418 [2024-07-12 15:01:39.300338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dbd10 (9): Bad file descriptor 00:22:21.418 [2024-07-12 15:01:39.300360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:21.418 [2024-07-12 15:01:39.300374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:21.418 [2024-07-12 15:01:39.300388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:21.418 [2024-07-12 15:01:39.300412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:21.418 [2024-07-12 15:01:39.300429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:21.418 [2024-07-12 15:01:49.390434] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:21.418 Received shutdown signal, test time was about 56.207865 seconds 00:22:21.418 00:22:21.418 Latency(us) 00:22:21.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.419 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.419 Verification LBA range: start 0x0 length 0x4000 00:22:21.419 Nvme0n1 : 56.21 7038.71 27.49 0.00 0.00 18154.06 800.58 7046430.72 00:22:21.419 =================================================================================================================== 00:22:21.419 Total : 7038.71 27.49 0.00 0.00 18154.06 800.58 7046430.72 00:22:21.419 15:01:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:21.751 15:02:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:21.751 15:02:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:21.751 15:02:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:21.751 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:21.751 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:22:21.751 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:21.751 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:22:21.752 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:21.752 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:21.752 rmmod nvme_tcp 00:22:21.752 rmmod nvme_fabrics 00:22:22.010 rmmod nvme_keyring 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 94712 ']' 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 94712 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94712 ']' 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94712 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94712 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:22.010 killing process with pid 94712 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94712' 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94712 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94712 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.010 15:02:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.268 15:02:00 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:22.268 00:22:22.268 real 1m2.907s 00:22:22.268 user 2m59.543s 00:22:22.268 sys 0m13.919s 00:22:22.268 15:02:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:22.268 ************************************ 00:22:22.268 15:02:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:22.268 END TEST nvmf_host_multipath 00:22:22.268 ************************************ 00:22:22.268 15:02:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:22.268 15:02:00 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:22.268 15:02:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:22.268 15:02:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:22.268 15:02:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.268 ************************************ 00:22:22.268 START TEST nvmf_timeout 00:22:22.268 ************************************ 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:22.268 * Looking for test storage... 
00:22:22.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.268 15:02:00 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.269 
15:02:00 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.269 15:02:00 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:22.269 Cannot find device "nvmf_tgt_br" 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:22.269 Cannot find device "nvmf_tgt_br2" 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:22.269 Cannot find device "nvmf_tgt_br" 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:22.269 Cannot find device "nvmf_tgt_br2" 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:22:22.269 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:22.526 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:22.526 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:22.526 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:22.526 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:22.526 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:22.526 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:22.526 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:22.526 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:22.526 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:22.527 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:22.527 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:22.527 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:22.527 15:02:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:22.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:22:22.527 00:22:22.527 --- 10.0.0.2 ping statistics --- 00:22:22.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.527 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:22.527 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:22.527 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:22:22.527 00:22:22.527 --- 10.0.0.3 ping statistics --- 00:22:22.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.527 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:22.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:22.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:22.527 00:22:22.527 --- 10.0.0.1 ping statistics --- 00:22:22.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.527 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=96092 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 96092 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96092 ']' 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.527 15:02:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:22.784 [2024-07-12 15:02:01.257424] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
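In short, the nvmf_veth_init sequence traced above amounts to the commands below. This is a condensed sketch assembled from the trace, not additional harness output: the namespace and interface names (nvmf_tgt_ns_spdk, nvmf_init_if, nvmf_init_br, nvmf_tgt_if, nvmf_tgt_br, nvmf_br) are the ones the log uses, and the second target interface (nvmf_tgt_if2 with 10.0.0.3) is configured the same way and is omitted here for brevity.

# test network: one veth pair for the initiator, one for the target namespace, bridged together
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side peers and allow NVMe/TCP traffic on port 4420
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target reachability check, as in the trace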
00:22:22.784 [2024-07-12 15:02:01.257531] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.784 [2024-07-12 15:02:01.393711] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:23.040 [2024-07-12 15:02:01.466195] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.040 [2024-07-12 15:02:01.466252] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.040 [2024-07-12 15:02:01.466266] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.040 [2024-07-12 15:02:01.466276] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.040 [2024-07-12 15:02:01.466289] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.040 [2024-07-12 15:02:01.467235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.040 [2024-07-12 15:02:01.467246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.970 15:02:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.970 15:02:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:23.970 15:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:23.970 15:02:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:23.970 15:02:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:23.970 15:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.970 15:02:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:23.970 15:02:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:23.970 [2024-07-12 15:02:02.595684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.970 15:02:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:24.536 Malloc0 00:22:24.536 15:02:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:24.793 15:02:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:25.050 15:02:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.308 [2024-07-12 15:02:03.755319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.308 15:02:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96189 00:22:25.308 15:02:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:25.308 15:02:03 nvmf_tcp.nvmf_timeout -- 
host/timeout.sh@34 -- # waitforlisten 96189 /var/tmp/bdevperf.sock 00:22:25.308 15:02:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96189 ']' 00:22:25.308 15:02:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.308 15:02:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.308 15:02:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.308 15:02:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.308 15:02:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:25.308 [2024-07-12 15:02:03.827328] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:22:25.308 [2024-07-12 15:02:03.827418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96189 ] 00:22:25.566 [2024-07-12 15:02:03.964529] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.566 [2024-07-12 15:02:04.032767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.499 15:02:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:26.499 15:02:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:26.499 15:02:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:26.499 15:02:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:27.064 NVMe0n1 00:22:27.064 15:02:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96231 00:22:27.064 15:02:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:27.064 15:02:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:27.064 Running I/O for 10 seconds... 
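The target/bdevperf setup traced above reduces to the RPC sequence below. This is a sketch assembled from the trace: $SPDK stands in for /home/vagrant/spdk_repo/spdk, the nvmf_tgt is assumed to already be running with -m 0x3 inside the target namespace and listening on /var/tmp/spdk.sock, and bdevperf is driven through its own RPC socket at /var/tmp/bdevperf.sock.

# target side: TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and subsystem cnode1 listening on 10.0.0.2:4420
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# host side: bdevperf in wait mode (-z), then attach a controller with a 5 s ctrlr-loss timeout and 2 s reconnect delay
$SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
# the test then removes the 4420 listener while I/O is running, which produces the aborted-command notices that follow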
00:22:27.998 15:02:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.258 [2024-07-12 15:02:06.686134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 
[2024-07-12 15:02:06.686404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.258 [2024-07-12 15:02:06.686714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.686735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.686757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.686778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.686799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.686822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.686851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.686878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.686911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.686942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.686963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.686984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.686996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 
[2024-07-12 15:02:06.687468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.258 [2024-07-12 15:02:06.687728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.258 [2024-07-12 15:02:06.687738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.687750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.687759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.687772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.687782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.687793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.687802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.687814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.687825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.687843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.687859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.687889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.687905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.687921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.687931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.687942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.687954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.687973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.687989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80896 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:28.259 [2024-07-12 15:02:06.688586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.259 [2024-07-12 15:02:06.688796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.259 [2024-07-12 15:02:06.688829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.259 [2024-07-12 15:02:06.688851] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.259 [2024-07-12 15:02:06.688872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.259 [2024-07-12 15:02:06.688893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.259 [2024-07-12 15:02:06.688924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.688981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.688993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689107] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.259 [2024-07-12 15:02:06.689341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.259 [2024-07-12 15:02:06.689508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459190 is same with the state(5) to be set 00:22:28.259 [2024-07-12 15:02:06.689553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:28.259 [2024-07-12 15:02:06.689566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:28.259 [2024-07-12 15:02:06.689581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81216 len:8 PRP1 0x0 PRP2 0x0 00:22:28.259 [2024-07-12 15:02:06.689597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689642] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1459190 was disconnected and freed. reset controller. 
00:22:28.259 [2024-07-12 15:02:06.689738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.259 [2024-07-12 15:02:06.689754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.259 [2024-07-12 15:02:06.689766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.259 [2024-07-12 15:02:06.689775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.260 [2024-07-12 15:02:06.689785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.260 [2024-07-12 15:02:06.689794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.260 [2024-07-12 15:02:06.689804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.260 [2024-07-12 15:02:06.689813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.260 [2024-07-12 15:02:06.689822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14083e0 is same with the state(5) to be set 00:22:28.260 [2024-07-12 15:02:06.690100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:28.260 [2024-07-12 15:02:06.690138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14083e0 (9): Bad file descriptor 00:22:28.260 [2024-07-12 15:02:06.690257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:28.260 [2024-07-12 15:02:06.690302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14083e0 with addr=10.0.0.2, port=4420 00:22:28.260 [2024-07-12 15:02:06.690315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14083e0 is same with the state(5) to be set 00:22:28.260 [2024-07-12 15:02:06.690335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14083e0 (9): Bad file descriptor 00:22:28.260 [2024-07-12 15:02:06.690351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:28.260 [2024-07-12 15:02:06.690361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:28.260 [2024-07-12 15:02:06.690381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:28.260 [2024-07-12 15:02:06.690402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
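The long run of ABORTED - SQ DELETION completions and the connect() failures with errno 111 (connection refused) above are the expected fallout of dropping the listener at timeout.sh@55: the queue pair is torn down, the outstanding I/O is aborted, and bdev_nvme then starts reconnect attempts paced by the --reconnect-delay-sec 2 setting while the 5 s --ctrlr-loss-timeout-sec budget runs down. During that window the controller and its namespace bdev are expected to still be registered, which is what the @57/@58 checks just below verify; a standalone equivalent of those checks would look roughly like this (same RPC socket and names as in this run):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  # While the controller is merely resetting (ctrlr-loss-timeout not yet
  # expired) it should still show up in bdev_nvme_get_controllers ...
  [[ "$($RPC bdev_nvme_get_controllers | jq -r '.[].name')" == "NVMe0" ]]

  # ... and the namespace bdev created from it should still exist.
  [[ "$($RPC bdev_get_bdevs | jq -r '.[].name')" == "NVMe0n1" ]]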
00:22:28.260 [2024-07-12 15:02:06.690416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:28.260 15:02:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:30.157 [2024-07-12 15:02:08.690683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.157 [2024-07-12 15:02:08.690752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14083e0 with addr=10.0.0.2, port=4420 00:22:30.158 [2024-07-12 15:02:08.690769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14083e0 is same with the state(5) to be set 00:22:30.158 [2024-07-12 15:02:08.690795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14083e0 (9): Bad file descriptor 00:22:30.158 [2024-07-12 15:02:08.690815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:30.158 [2024-07-12 15:02:08.690824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:30.158 [2024-07-12 15:02:08.690836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:30.158 [2024-07-12 15:02:08.690864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.158 [2024-07-12 15:02:08.690876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:30.158 15:02:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:30.158 15:02:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:30.158 15:02:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:30.415 15:02:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:30.415 15:02:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:30.415 15:02:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:30.415 15:02:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:30.673 15:02:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:30.673 15:02:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:32.047 [2024-07-12 15:02:10.691124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.047 [2024-07-12 15:02:10.691190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14083e0 with addr=10.0.0.2, port=4420 00:22:32.047 [2024-07-12 15:02:10.691207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14083e0 is same with the state(5) to be set 00:22:32.047 [2024-07-12 15:02:10.691249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14083e0 (9): Bad file descriptor 00:22:32.047 [2024-07-12 15:02:10.691281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:32.047 [2024-07-12 15:02:10.691293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:32.047 [2024-07-12 15:02:10.691305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 00:22:32.047 [2024-07-12 15:02:10.691331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:32.047 [2024-07-12 15:02:10.691343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:34.580 [2024-07-12 15:02:12.691467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:34.580 [2024-07-12 15:02:12.691541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:34.580 [2024-07-12 15:02:12.691554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:34.580 [2024-07-12 15:02:12.691565] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:34.580 [2024-07-12 15:02:12.691594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:35.145 00:22:35.145 Latency(us) 00:22:35.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.145 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:35.145 Verification LBA range: start 0x0 length 0x4000 00:22:35.145 NVMe0n1 : 8.13 1235.93 4.83 15.74 0.00 102119.41 2263.97 7015926.69 00:22:35.145 =================================================================================================================== 00:22:35.145 Total : 1235.93 4.83 15.74 0.00 102119.41 2263.97 7015926.69 00:22:35.145 0 00:22:35.710 15:02:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:35.710 15:02:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:35.710 15:02:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:35.968 15:02:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:35.968 15:02:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:35.968 15:02:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:35.968 15:02:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:36.229 15:02:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:36.229 15:02:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 96231 00:22:36.229 15:02:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96189 00:22:36.229 15:02:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96189 ']' 00:22:36.229 15:02:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96189 00:22:36.229 15:02:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:36.229 15:02:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:36.229 15:02:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96189 00:22:36.229 15:02:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:36.229 15:02:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:36.229 killing process with pid 96189 00:22:36.229 15:02:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96189' 00:22:36.229 15:02:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96189 
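In summary, the first run ends after roughly 8 s of I/O at about 1236 IOPS with a steady failure rate once the listener is gone, and by the time the controller-loss timeout has expired the controller has been dropped, so the @62/@63 checks below expect empty controller and bdev lists before the first bdevperf process is killed. The second phase then restores the listener on the target side and re-attaches from a fresh bdevperf instance, this time adding a 2 s fast-I/O-fail window; condensed from the trace that follows, that setup is roughly:

  SPDK=/home/vagrant/spdk_repo/spdk

  # Target side: put the TCP listener back so new connections can succeed.
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420

  # Host side: same attach as before on a fresh bdevperf RPC session,
  # now with --fast-io-fail-timeout-sec 2 and a 1 s reconnect delay.
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $RPC bdev_nvme_set_options -r -1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 \
      --reconnect-delay-sec 1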
00:22:36.229 Received shutdown signal, test time was about 9.254001 seconds 00:22:36.229 00:22:36.229 Latency(us) 00:22:36.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.229 =================================================================================================================== 00:22:36.229 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:36.229 15:02:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96189 00:22:36.492 15:02:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:36.750 [2024-07-12 15:02:15.179926] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.750 15:02:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96389 00:22:36.750 15:02:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:36.750 15:02:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96389 /var/tmp/bdevperf.sock 00:22:36.750 15:02:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96389 ']' 00:22:36.750 15:02:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.750 15:02:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.750 15:02:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.750 15:02:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.750 15:02:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:36.750 [2024-07-12 15:02:15.257111] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:22:36.750 [2024-07-12 15:02:15.257213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96389 ] 00:22:36.750 [2024-07-12 15:02:15.392858] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.008 [2024-07-12 15:02:15.460781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.008 15:02:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.008 15:02:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:37.008 15:02:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:37.265 15:02:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:37.523 NVMe0n1 00:22:37.781 15:02:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96423 00:22:37.781 15:02:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:37.781 15:02:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:37.781 Running I/O for 10 seconds... 00:22:38.717 15:02:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.989 [2024-07-12 15:02:17.465823] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.465875] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.465886] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.465895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.465903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.465913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.465922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.465930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.465939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.465947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.465955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.465964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.465972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.465981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.465989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.465997] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.466006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.466015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.466023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.466031] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.466040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.466048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.466057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.466065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.989 [2024-07-12 15:02:17.466073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466139] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466290] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.466314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b30 is same with the 
state(5) to be set 00:22:38.990 [2024-07-12 15:02:17.468112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.990 [2024-07-12 15:02:17.468154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 
15:02:17.468382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.990 [2024-07-12 15:02:17.468748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.990 [2024-07-12 15:02:17.468758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.468770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.468780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.468791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.468801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.468812] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.468822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.468833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.468843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.468854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.468864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.468875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.468885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.468897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.468906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.468917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.468927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.468938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.468947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.468960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.468969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.468980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.468990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 
lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 
15:02:17.469454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.991 [2024-07-12 15:02:17.469667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.991 [2024-07-12 15:02:17.469678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.992 [2024-07-12 15:02:17.469688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.469699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.992 [2024-07-12 15:02:17.469708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.469720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.992 [2024-07-12 15:02:17.469729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.469741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.469750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.469761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.469770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.469781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.469791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.469802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.469811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.469822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.469831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.469842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.469852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.469863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.469875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.469886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.469895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.469907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.469916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.469927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.469937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.469948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.469957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.469968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.469978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.469989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.469999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470323] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470543] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.992 [2024-07-12 15:02:17.470576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.992 [2024-07-12 15:02:17.470587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.993 [2024-07-12 15:02:17.470596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.470607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.993 [2024-07-12 15:02:17.470617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.470628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.993 [2024-07-12 15:02:17.470637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.470648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.993 [2024-07-12 15:02:17.470657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.470668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.993 [2024-07-12 15:02:17.470677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.470689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.993 [2024-07-12 15:02:17.470698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.470709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.993 [2024-07-12 15:02:17.470718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.470729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.993 [2024-07-12 15:02:17.470738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.470767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:22:38.993 [2024-07-12 15:02:17.470779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79424 len:8 PRP1 0x0 PRP2 0x0 00:22:38.993 [2024-07-12 15:02:17.470789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.470802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.993 [2024-07-12 15:02:17.470809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.993 [2024-07-12 15:02:17.470818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79432 len:8 PRP1 0x0 PRP2 0x0 00:22:38.993 [2024-07-12 15:02:17.470827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.470836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.993 [2024-07-12 15:02:17.470843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.993 [2024-07-12 15:02:17.470851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79440 len:8 PRP1 0x0 PRP2 0x0 00:22:38.993 [2024-07-12 15:02:17.470860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.470869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.993 [2024-07-12 15:02:17.470877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.993 [2024-07-12 15:02:17.470886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79448 len:8 PRP1 0x0 PRP2 0x0 00:22:38.993 [2024-07-12 15:02:17.470895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.470905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.993 [2024-07-12 15:02:17.470912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.993 [2024-07-12 15:02:17.470920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79456 len:8 PRP1 0x0 PRP2 0x0 00:22:38.993 [2024-07-12 15:02:17.470929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.470938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.993 [2024-07-12 15:02:17.470945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.993 [2024-07-12 15:02:17.470953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79464 len:8 PRP1 0x0 PRP2 0x0 00:22:38.993 [2024-07-12 15:02:17.470962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.470972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.993 [2024-07-12 15:02:17.470979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.993 [2024-07-12 15:02:17.470987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79472 len:8 PRP1 0x0 PRP2 0x0 00:22:38.993 [2024-07-12 15:02:17.470996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.471048] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x599190 was disconnected and freed. reset controller. 00:22:38.993 [2024-07-12 15:02:17.471135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.993 [2024-07-12 15:02:17.471151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.471162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.993 [2024-07-12 15:02:17.471172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.471184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.993 [2024-07-12 15:02:17.471194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.471203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.993 [2024-07-12 15:02:17.471213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.993 [2024-07-12 15:02:17.471222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5483e0 is same with the state(5) to be set 00:22:38.993 [2024-07-12 15:02:17.471456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:38.993 [2024-07-12 15:02:17.471487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5483e0 (9): Bad file descriptor 00:22:38.993 [2024-07-12 15:02:17.471590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.993 [2024-07-12 15:02:17.471613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5483e0 with addr=10.0.0.2, port=4420 00:22:38.993 [2024-07-12 15:02:17.471624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5483e0 is same with the state(5) to be set 00:22:38.993 [2024-07-12 15:02:17.471642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5483e0 (9): Bad file descriptor 00:22:38.993 [2024-07-12 15:02:17.471658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:38.993 [2024-07-12 15:02:17.471668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:38.993 [2024-07-12 15:02:17.471682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:38.993 [2024-07-12 15:02:17.471701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
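The burst of ABORTED - SQ DELETION completions and the errno 111 reconnect failures above are the expected behaviour for this phase of the timeout test: the target listener was removed while bdevperf had I/O in flight, so queued commands are aborted and the host retries the connection once per reconnect delay. A minimal sketch of the bdevperf-side setup that arms those timers is below; the rpc.py calls, socket path, and addresses are copied from the invocations recorded earlier in this log, and the flag comments reflect the usual SPDK meaning of these options rather than anything stated in the log itself.

```bash
#!/usr/bin/env bash
# Sketch of the controller attach exercised by this log (values taken verbatim
# from the rpc.py calls shown above; adjust paths/addresses for other setups).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Same global bdev_nvme option used by the test (value copied from the log above).
$rpc -s "$sock" bdev_nvme_set_options -r -1

# Attach the TCP controller with the timeout knobs under test:
#   --reconnect-delay-sec 1       retry the connection roughly every second
#   --fast-io-fail-timeout-sec 2  start failing pending I/O after 2 s disconnected
#   --ctrlr-loss-timeout-sec 5    give up on the controller after 5 s disconnected
$rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
```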
00:22:38.993 [2024-07-12 15:02:17.471712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:38.993 15:02:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:22:39.930 [2024-07-12 15:02:18.471884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.930 [2024-07-12 15:02:18.471969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5483e0 with addr=10.0.0.2, port=4420 00:22:39.930 [2024-07-12 15:02:18.471987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5483e0 is same with the state(5) to be set 00:22:39.930 [2024-07-12 15:02:18.472016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5483e0 (9): Bad file descriptor 00:22:39.930 [2024-07-12 15:02:18.472036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:39.930 [2024-07-12 15:02:18.472047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:39.930 [2024-07-12 15:02:18.472058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:39.930 [2024-07-12 15:02:18.472086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:39.930 [2024-07-12 15:02:18.472099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:39.930 15:02:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.189 [2024-07-12 15:02:18.782430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.189 15:02:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 96423 00:22:41.125 [2024-07-12 15:02:19.487778] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:49.241 00:22:49.241 Latency(us) 00:22:49.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.241 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:49.241 Verification LBA range: start 0x0 length 0x4000 00:22:49.241 NVMe0n1 : 10.01 6195.85 24.20 0.00 0.00 20615.18 1727.77 3019898.88 00:22:49.241 =================================================================================================================== 00:22:49.241 Total : 6195.85 24.20 0.00 0.00 20615.18 1727.77 3019898.88 00:22:49.241 0 00:22:49.241 15:02:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96537 00:22:49.241 15:02:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:49.241 15:02:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:22:49.241 Running I/O for 10 seconds... 
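The host/timeout.sh line markers embedded above (@83 through @92 for the first pass, @96 onward for the second) correspond to a simple fault-injection cycle: start bdevperf's perform_tests in the background, remove the target's TCP listener out from under the active connection, pause briefly, restore the listener so the controller reset can succeed before the loss timeout expires, then wait for the I/O job to report. A minimal sketch of that sequence follows, assuming the same rpc.py socket, subsystem NQN, and listener address that appear in this log; the PID handling is simplified relative to the real script.

```bash
#!/usr/bin/env bash
# Sketch of the listener remove/re-add cycle driven by the timeout test
# (socket path, NQN, and address copied from the log; PID handling simplified).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Kick off the 10-second verify workload in the background.
$bdevperf_py -s "$sock" perform_tests &
rpc_pid=$!
sleep 1

# Drop the target listener while I/O is in flight; in-flight commands are
# aborted (SQ DELETION) and the host starts its reconnect/timeout handling.
$rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
sleep 1

# Restore the listener so the controller reset succeeds before
# ctrlr-loss-timeout-sec expires, then wait for bdevperf to finish.
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
wait "$rpc_pid"
```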
00:22:49.241 15:02:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.241 [2024-07-12 15:02:27.621007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.241 [2024-07-12 15:02:27.621485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.241 [2024-07-12 15:02:27.621639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.241 [2024-07-12 15:02:27.621728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.241 [2024-07-12 15:02:27.621811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.241 [2024-07-12 15:02:27.621899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.241 [2024-07-12 15:02:27.621972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.241 [2024-07-12 15:02:27.622045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.241 [2024-07-12 15:02:27.622131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.241 [2024-07-12 15:02:27.622206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.241 [2024-07-12 15:02:27.622295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.241 [2024-07-12 15:02:27.622401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.241 [2024-07-12 15:02:27.622480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.241 [2024-07-12 15:02:27.622595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.241 [2024-07-12 15:02:27.622674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.241 [2024-07-12 15:02:27.622745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.241 [2024-07-12 15:02:27.622817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.241 [2024-07-12 15:02:27.622907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.241 [2024-07-12 15:02:27.622980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.241 [2024-07-12 
15:02:27.623059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.241 [2024-07-12 15:02:27.623156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.241 [2024-07-12 15:02:27.623227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.623324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.242 [2024-07-12 15:02:27.623428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.623503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.242 [2024-07-12 15:02:27.623609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.623705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.242 [2024-07-12 15:02:27.623791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.623877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.242 [2024-07-12 15:02:27.623939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.624015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.242 [2024-07-12 15:02:27.624116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.624183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.242 [2024-07-12 15:02:27.624307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.624402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.624489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.624569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.624650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.624728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.624818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.624882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.624962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.625039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.625120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.625182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.625255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.625317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.625402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.625532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.625683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.625834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.625984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.626116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.626276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.626433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.626611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.626739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.626858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.626950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.627037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.627113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.627201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.627264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.627341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.627418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.627535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.627635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.627714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.627803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.627889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.627975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.628046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.628133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.628217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.628300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.628384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.628448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.628533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.628609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.628689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.628752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.628846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.628924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.629003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.629066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.629139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.629225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.629304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.629380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.629462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.629570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.629674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.629748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.629825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.242 [2024-07-12 15:02:27.629913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.242 [2024-07-12 15:02:27.629997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.630083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.630173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.630245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.630329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 
[2024-07-12 15:02:27.630390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.630464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.630555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.630618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.630692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.630854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.630987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.631084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.631160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.631273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.631353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.631440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.631505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.631624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.631700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.631789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.631862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.631953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.632026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.632104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.632181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.632311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.632394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.632484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.632577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.632670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.632748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.632833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.632920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.632995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.633081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.633165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.633257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.633348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.633436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.633533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.633625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.633715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.633812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.633898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.633961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:64 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.634051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.634127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.634232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.634316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.634401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.634477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.634592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.634691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.634768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.634841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.634902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.634973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.635033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.635102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.635171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.635279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.635375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.635462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.635597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.635689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78072 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.635784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.635871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.635958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.636037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.636121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.636218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.636329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.636395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.636463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.636581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.636694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.636776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.636859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.636936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.637007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.637086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.243 [2024-07-12 15:02:27.637164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.243 [2024-07-12 15:02:27.637278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.637388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.637465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:49.244 [2024-07-12 15:02:27.637598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.637678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.637786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.637864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.637950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.638028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.638123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.638208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.638329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.638412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.638507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.638636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.638754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.638840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.638926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.638988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.639063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.639158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.639251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.639332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.639413] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.639492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.639605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.639682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.639778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.639842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.639931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.639994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.640985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.640997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.641006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.641018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.641027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.641039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.641049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.641061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.641072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.641085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.641095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.641107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.244 [2024-07-12 15:02:27.641116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.641128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c6470 is same with the state(5) to be set 00:22:49.244 [2024-07-12 15:02:27.641145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:49.244 [2024-07-12 15:02:27.641154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:49.244 [2024-07-12 15:02:27.641162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78448 len:8 PRP1 0x0 PRP2 0x0 00:22:49.244 [2024-07-12 15:02:27.641171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.641228] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5c6470 was disconnected and freed. reset controller. 
00:22:49.244 [2024-07-12 15:02:27.641378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.244 [2024-07-12 15:02:27.641396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.641407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.244 [2024-07-12 15:02:27.641417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.244 [2024-07-12 15:02:27.641427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.245 [2024-07-12 15:02:27.641436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.245 [2024-07-12 15:02:27.641446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.245 [2024-07-12 15:02:27.641455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.245 [2024-07-12 15:02:27.641465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5483e0 is same with the state(5) to be set 00:22:49.245 [2024-07-12 15:02:27.641732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:49.245 [2024-07-12 15:02:27.641763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5483e0 (9): Bad file descriptor 00:22:49.245 [2024-07-12 15:02:27.641872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.245 [2024-07-12 15:02:27.641896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5483e0 with addr=10.0.0.2, port=4420 00:22:49.245 [2024-07-12 15:02:27.641907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5483e0 is same with the state(5) to be set 00:22:49.245 [2024-07-12 15:02:27.641925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5483e0 (9): Bad file descriptor 00:22:49.245 [2024-07-12 15:02:27.641942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:49.245 [2024-07-12 15:02:27.641951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:49.245 [2024-07-12 15:02:27.641963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:49.245 [2024-07-12 15:02:27.641985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:49.245 [2024-07-12 15:02:27.641997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:49.245 15:02:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:50.180 [2024-07-12 15:02:28.642152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.180 [2024-07-12 15:02:28.642226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5483e0 with addr=10.0.0.2, port=4420 00:22:50.180 [2024-07-12 15:02:28.642243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5483e0 is same with the state(5) to be set 00:22:50.180 [2024-07-12 15:02:28.642272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5483e0 (9): Bad file descriptor 00:22:50.180 [2024-07-12 15:02:28.642291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:50.180 [2024-07-12 15:02:28.642301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:50.180 [2024-07-12 15:02:28.642313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:50.180 [2024-07-12 15:02:28.642340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.180 [2024-07-12 15:02:28.642352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:51.113 [2024-07-12 15:02:29.642525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.113 [2024-07-12 15:02:29.642601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5483e0 with addr=10.0.0.2, port=4420 00:22:51.113 [2024-07-12 15:02:29.642619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5483e0 is same with the state(5) to be set 00:22:51.113 [2024-07-12 15:02:29.642647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5483e0 (9): Bad file descriptor 00:22:51.113 [2024-07-12 15:02:29.642667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:51.113 [2024-07-12 15:02:29.642677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:51.113 [2024-07-12 15:02:29.642689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:51.113 [2024-07-12 15:02:29.642715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:51.113 [2024-07-12 15:02:29.642728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.045 [2024-07-12 15:02:30.646815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.045 [2024-07-12 15:02:30.646891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5483e0 with addr=10.0.0.2, port=4420 00:22:52.045 [2024-07-12 15:02:30.646910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5483e0 is same with the state(5) to be set 00:22:52.045 [2024-07-12 15:02:30.647176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5483e0 (9): Bad file descriptor 00:22:52.045 [2024-07-12 15:02:30.647442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:52.045 [2024-07-12 15:02:30.647458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:52.045 [2024-07-12 15:02:30.647470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:52.045 15:02:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.045 [2024-07-12 15:02:30.651574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:52.045 [2024-07-12 15:02:30.651618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.304 [2024-07-12 15:02:30.955000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.562 15:02:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 96537 00:22:53.129 [2024-07-12 15:02:31.691640] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
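The pattern traced above is the core of the nvmf_timeout host test (host/timeout.sh): while bdevperf runs a verify workload, the script drops the target's TCP listener, every queued command is aborted with SQ DELETION, the initiator's reconnect attempts fail with errno 111, and once the listener is re-added the next controller reset succeeds. The rpc.py invocations echoed at host/timeout.sh@96 through @103 correspond roughly to the sketch below; the NQN, address, port and script paths are copied from the trace, while the variable names, timings and comments are illustrative assumptions rather than the actual test script.

    # Sketch of the listener-toggle cycle echoed at host/timeout.sh@96-@103 (assumed wiring).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Kick off the 10 s verify job against the attached controller in the background.
    $BPERF -s /var/tmp/bdevperf.sock perform_tests &
    rpc_pid=$!
    sleep 1

    # Drop the TCP listener: queued I/O is aborted (SQ DELETION) and every
    # reconnect attempt fails with errno 111 (connection refused).
    $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    sleep 3

    # Restore the listener: the next reconnect succeeds and the trace reports
    # "Resetting controller successful."
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

    # Wait for bdevperf to finish and print the latency summary.
    wait $rpc_pid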
00:22:58.387 00:22:58.387 Latency(us) 00:22:58.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.387 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:58.387 Verification LBA range: start 0x0 length 0x4000 00:22:58.387 NVMe0n1 : 10.01 5253.26 20.52 3520.09 0.00 14559.57 647.91 3035150.89 00:22:58.387 =================================================================================================================== 00:22:58.387 Total : 5253.26 20.52 3520.09 0.00 14559.57 0.00 3035150.89 00:22:58.387 0 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96389 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96389 ']' 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96389 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96389 00:22:58.387 killing process with pid 96389 00:22:58.387 Received shutdown signal, test time was about 10.000000 seconds 00:22:58.387 00:22:58.387 Latency(us) 00:22:58.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.387 =================================================================================================================== 00:22:58.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96389' 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96389 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96389 00:22:58.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96659 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96659 /var/tmp/bdevperf.sock 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96659 ']' 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.387 15:02:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:58.387 [2024-07-12 15:02:36.766368] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
00:22:58.387 [2024-07-12 15:02:36.767082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96659 ] 00:22:58.387 [2024-07-12 15:02:36.903266] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.387 [2024-07-12 15:02:36.962434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.644 15:02:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.644 15:02:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:58.644 15:02:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96678 00:22:58.644 15:02:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96659 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:58.644 15:02:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:58.901 15:02:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:59.466 NVMe0n1 00:22:59.466 15:02:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96726 00:22:59.466 15:02:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:59.466 15:02:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:59.466 Running I/O for 10 seconds... 
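The xtrace entries above set up the next bdevperf run. Collected in one place as a sketch (commands taken verbatim from this log, not an authoritative reproduction of the script; bdevperf is normally left running in the background while the RPCs are issued against its socket):
# Start bdevperf on core mask 0x4 in wait-for-RPC mode (-z): queue depth 128, 4096-byte random reads, 10 s run
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
# Configure bdev_nvme (options as recorded in this run) and attach the target;
# --reconnect-delay-sec 2 / --ctrlr-loss-timeout-sec 5 bound how long reconnects are retried
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# Kick off the I/O run through the bdevperf RPC helper
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests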
00:23:00.399 15:02:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:00.683 [2024-07-12 15:02:39.188571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.189215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.189330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.189401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.189479] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.189577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.189648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.189712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.189774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.189846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.189914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.189982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.190043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.190105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.190173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.190248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.190315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.190377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.190442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.190504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.190603] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.190672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.190735] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.190801] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.190869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.190932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.190995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.191069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.191132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.191194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.191256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.191317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.191379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.191440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.191510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.191610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.191676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.191738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.191800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.191866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.191934] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.191997] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the 
state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.192059] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.192121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.192192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.192269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.192347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.192424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.192488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.192570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.192642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.192707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.192776] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.192839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.192905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.683 [2024-07-12 15:02:39.192968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.193030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.193092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.193163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.193229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.193292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.193354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.193416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.193478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.193557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.193630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.193707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.193777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.193844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.193912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.193975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.194037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.194099] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.194154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.194219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.194282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.194361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.194428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.194508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.194598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.194664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.194730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.194794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.194856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.194918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 
15:02:39.194981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.195044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.195107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.195170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.195243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.195310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.195373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.195442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.195509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.195596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.195661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.195724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.195800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.195858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.195919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.195973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same 
with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196862] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196871] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196897] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.196913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753260 is same with the state(5) to be set 00:23:00.684 [2024-07-12 15:02:39.197155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197224] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.684 [2024-07-12 15:02:39.197653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.684 [2024-07-12 15:02:39.197662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.197988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.197997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 
[2024-07-12 15:02:39.198112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:29680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:28392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198326] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-07-12 15:02:39.198638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-07-12 15:02:39.198650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198758] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:118232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 
lba:68128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.198988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.198999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:28600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58144 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 
15:02:39.199404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:68176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-07-12 15:02:39.199646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-07-12 15:02:39.199663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.687 [2024-07-12 15:02:39.199675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.199687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.687 [2024-07-12 15:02:39.199699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.199710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.687 [2024-07-12 15:02:39.199719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.199731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.687 [2024-07-12 15:02:39.199740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.199751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.687 [2024-07-12 15:02:39.199761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.199772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.687 [2024-07-12 15:02:39.199781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.199792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.687 [2024-07-12 15:02:39.199801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.199813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.687 [2024-07-12 15:02:39.199822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.199833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.687 [2024-07-12 15:02:39.199842] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.199853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.687 [2024-07-12 15:02:39.199863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.199874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.687 [2024-07-12 15:02:39.199884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.199896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.687 [2024-07-12 15:02:39.199905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.199916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.687 [2024-07-12 15:02:39.199925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.199936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd190 is same with the state(5) to be set 00:23:00.687 [2024-07-12 15:02:39.199950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.687 [2024-07-12 15:02:39.199958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.687 [2024-07-12 15:02:39.199966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47856 len:8 PRP1 0x0 PRP2 0x0 00:23:00.687 [2024-07-12 15:02:39.199975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.200024] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1efd190 was disconnected and freed. reset controller. 
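Side note on the status pair printed throughout the dump above: spdk_nvme_print_completion shows it as (status code type/status code), so (00/08) means status code type 0x0 (the NVMe generic command status set) with code 0x08, "Command Aborted due to SQ Deletion" - every read still queued on qpair 0x1efd190 was failed back when its submission queue was torn down for the controller reset. A small, purely hypothetical shell helper (not part of this test) that maps such a pair to a name, covering only a few generic codes:

    decode_nvme_status() {
        # usage: decode_nvme_status <sct> <sc>, both given as two hex digits
        local sct=$1 sc=$2
        if [ "$sct" = "00" ]; then
            case "$sc" in
                00) echo "SUCCESS" ;;
                07) echo "ABORTED - BY REQUEST" ;;
                08) echo "ABORTED - SQ DELETION" ;;
                *)  echo "GENERIC STATUS 0x$sc" ;;
            esac
        else
            echo "SCT 0x$sct SC 0x$sc"
        fi
    }
    decode_nvme_status 00 08   # prints: ABORTED - SQ DELETION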
00:23:00.687 [2024-07-12 15:02:39.200132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.687 [2024-07-12 15:02:39.200148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.200159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.687 [2024-07-12 15:02:39.200177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.200194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.687 [2024-07-12 15:02:39.200203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.200213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.687 [2024-07-12 15:02:39.200222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.687 [2024-07-12 15:02:39.200231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac3e0 is same with the state(5) to be set 00:23:00.687 [2024-07-12 15:02:39.200508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:00.687 [2024-07-12 15:02:39.200549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eac3e0 (9): Bad file descriptor 00:23:00.687 [2024-07-12 15:02:39.200661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.687 [2024-07-12 15:02:39.200684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eac3e0 with addr=10.0.0.2, port=4420 00:23:00.687 [2024-07-12 15:02:39.200695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac3e0 is same with the state(5) to be set 00:23:00.687 [2024-07-12 15:02:39.200713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eac3e0 (9): Bad file descriptor 00:23:00.687 [2024-07-12 15:02:39.200728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:00.687 [2024-07-12 15:02:39.200737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:00.687 [2024-07-12 15:02:39.200747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:00.687 [2024-07-12 15:02:39.200768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:00.687 [2024-07-12 15:02:39.200778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:00.687 15:02:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 96726 00:23:02.631 [2024-07-12 15:02:41.201135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.631 [2024-07-12 15:02:41.201211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eac3e0 with addr=10.0.0.2, port=4420 00:23:02.631 [2024-07-12 15:02:41.201229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac3e0 is same with the state(5) to be set 00:23:02.631 [2024-07-12 15:02:41.201256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eac3e0 (9): Bad file descriptor 00:23:02.631 [2024-07-12 15:02:41.201275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:02.631 [2024-07-12 15:02:41.201286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:02.631 [2024-07-12 15:02:41.201296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:02.631 [2024-07-12 15:02:41.201323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:02.631 [2024-07-12 15:02:41.201335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:05.158 [2024-07-12 15:02:43.201684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.158 [2024-07-12 15:02:43.201759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eac3e0 with addr=10.0.0.2, port=4420 00:23:05.158 [2024-07-12 15:02:43.201778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac3e0 is same with the state(5) to be set 00:23:05.158 [2024-07-12 15:02:43.201806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eac3e0 (9): Bad file descriptor 00:23:05.158 [2024-07-12 15:02:43.201838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:05.158 [2024-07-12 15:02:43.201850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:05.158 [2024-07-12 15:02:43.201861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:05.158 [2024-07-12 15:02:43.201889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.158 [2024-07-12 15:02:43.201900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:07.059 [2024-07-12 15:02:45.202054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
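The connect() failures above return errno 111 (ECONNREFUSED): nothing is listening any longer on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1, so each scheduled reconnect of the NVMe0 controller fails, bdev_nvme waits out its configured reconnect delay (roughly two seconds between the attempts logged here) and tries again until the controller-loss timeout is reached or the harness tears the workload down. As a rough sketch only - not the exact command this run used, flag names as exposed by recent SPDK rpc.py, and the 2 s / 8 s values purely illustrative - a controller with that kind of retry behaviour can be attached like this:

    scripts/rpc.py bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 \
        --ctrlr-loss-timeout-sec 8

With settings of that shape, the trace printed further below is what you would expect: one reset, then a reconnect attempt about every two seconds until the test stops the bdevperf process.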
00:23:07.059 [2024-07-12 15:02:45.202123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:07.059 [2024-07-12 15:02:45.202144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:07.059 [2024-07-12 15:02:45.202159] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:23:07.059 [2024-07-12 15:02:45.202200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:07.627
00:23:07.627 Latency(us)
00:23:07.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:07.627 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:23:07.627 NVMe0n1 : 8.16 2468.46 9.64 15.68 0.00 51520.16 2561.86 7046430.72
00:23:07.627 ===================================================================================================================
00:23:07.627 Total : 2468.46 9.64 15.68 0.00 51520.16 2561.86 7046430.72
00:23:07.627 0
00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:23:07.627 Attaching 5 probes...
00:23:07.627 1504.685299: reset bdev controller NVMe0
00:23:07.627 1504.775366: reconnect bdev controller NVMe0
00:23:07.627 3505.168076: reconnect delay bdev controller NVMe0
00:23:07.627 3505.195379: reconnect bdev controller NVMe0
00:23:07.627 5505.724321: reconnect delay bdev controller NVMe0
00:23:07.627 5505.747973: reconnect bdev controller NVMe0
00:23:07.627 7506.201237: reconnect delay bdev controller NVMe0
00:23:07.627 7506.230321: reconnect bdev controller NVMe0
00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 96678
00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96659
00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96659 ']'
00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96659
00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96659
00:23:07.627 killing process with pid 96659 Received shutdown signal, test time was about 8.204638 seconds
00:23:07.627
00:23:07.627 Latency(us)
00:23:07.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:07.627 ===================================================================================================================
00:23:07.627 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96659'
00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@967 -- # kill 96659 00:23:07.627 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96659 00:23:07.980 15:02:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:08.239 rmmod nvme_tcp 00:23:08.239 rmmod nvme_fabrics 00:23:08.239 rmmod nvme_keyring 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 96092 ']' 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 96092 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96092 ']' 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96092 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96092 00:23:08.239 killing process with pid 96092 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96092' 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96092 00:23:08.239 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96092 00:23:08.499 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:08.499 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:08.499 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:08.499 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:08.499 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:08.499 15:02:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.499 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.499 15:02:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.499 15:02:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:08.499 ************************************ 00:23:08.499 END TEST 
nvmf_timeout 00:23:08.499 ************************************ 00:23:08.499 00:23:08.499 real 0m46.278s 00:23:08.499 user 2m17.030s 00:23:08.499 sys 0m4.608s 00:23:08.499 15:02:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:08.499 15:02:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:08.499 15:02:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:08.499 15:02:47 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:23:08.499 15:02:47 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:23:08.499 15:02:47 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.499 15:02:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:08.499 15:02:47 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:23:08.499 ************************************ 00:23:08.499 END TEST nvmf_tcp 00:23:08.499 ************************************ 00:23:08.499 00:23:08.499 real 15m47.426s 00:23:08.499 user 42m15.860s 00:23:08.499 sys 3m17.325s 00:23:08.499 15:02:47 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:08.499 15:02:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:08.499 15:02:47 -- common/autotest_common.sh@1142 -- # return 0 00:23:08.499 15:02:47 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:23:08.499 15:02:47 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:23:08.499 15:02:47 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:08.499 15:02:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.499 15:02:47 -- common/autotest_common.sh@10 -- # set +x 00:23:08.499 ************************************ 00:23:08.499 START TEST spdkcli_nvmf_tcp 00:23:08.499 ************************************ 00:23:08.499 15:02:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:23:08.758 * Looking for test storage... 
00:23:08.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=96944 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:23:08.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
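The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message is the harness pausing until the freshly launched nvmf_tgt (pid 96944 here) answers on its RPC socket; only once it does can the spdkcli commands below be replayed against it. A minimal stand-in sketch for that wait, assuming the default /var/tmp/spdk.sock path, is to poll a harmless RPC until it succeeds:

    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done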
00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 96944 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 96944 ']' 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.758 15:02:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:08.758 [2024-07-12 15:02:47.308771] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:23:08.758 [2024-07-12 15:02:47.309191] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96944 ] 00:23:09.016 [2024-07-12 15:02:47.453622] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:09.016 [2024-07-12 15:02:47.514509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.016 [2024-07-12 15:02:47.514530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.016 15:02:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.017 15:02:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:23:09.017 15:02:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:23:09.017 15:02:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.017 15:02:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:09.017 15:02:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:23:09.017 15:02:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:23:09.017 15:02:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:23:09.017 15:02:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:09.017 15:02:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:09.017 15:02:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:23:09.017 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:23:09.017 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:23:09.017 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:23:09.017 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:23:09.017 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:23:09.017 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:23:09.017 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 
127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:23:09.017 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:23:09.017 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:23:09.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:23:09.017 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:23:09.017 ' 00:23:12.299 [2024-07-12 15:02:50.416320] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.277 [2024-07-12 15:02:51.717354] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:23:15.807 [2024-07-12 15:02:54.107004] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:23:17.708 [2024-07-12 15:02:56.156570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:23:19.127 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:23:19.127 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:23:19.127 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:23:19.127 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:23:19.127 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:23:19.127 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:23:19.127 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:23:19.127 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:23:19.127 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:23:19.127 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:23:19.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:23:19.127 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:23:19.386 15:02:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:23:19.386 15:02:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:19.386 15:02:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:19.386 15:02:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:23:19.386 15:02:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:19.386 15:02:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:19.386 15:02:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:23:19.386 15:02:57 
spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:23:19.951 15:02:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:23:19.951 15:02:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:23:19.951 15:02:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:23:19.951 15:02:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:19.951 15:02:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:19.951 15:02:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:23:19.951 15:02:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:19.951 15:02:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:19.951 15:02:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:23:19.951 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:23:19.951 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:23:19.951 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:23:19.951 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:23:19.951 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:23:19.951 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:23:19.951 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:23:19.951 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:23:19.951 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:23:19.951 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:23:19.951 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:23:19.951 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:23:19.951 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:23:19.951 ' 00:23:25.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:23:25.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:23:25.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:25.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:23:25.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:23:25.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:23:25.218 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:23:25.218 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:25.218 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 
00:23:25.218 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:23:25.218 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:23:25.218 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:23:25.218 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:23:25.218 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:23:25.218 15:03:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:23:25.218 15:03:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:25.218 15:03:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 15:03:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 96944 00:23:25.218 15:03:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96944 ']' 00:23:25.218 15:03:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96944 00:23:25.218 15:03:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:23:25.218 15:03:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:25.218 15:03:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96944 00:23:25.218 killing process with pid 96944 00:23:25.218 15:03:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:25.218 15:03:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:25.218 15:03:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96944' 00:23:25.218 15:03:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 96944 00:23:25.218 15:03:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 96944 00:23:25.476 15:03:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:23:25.476 15:03:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:23:25.476 15:03:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 96944 ']' 00:23:25.476 15:03:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 96944 00:23:25.476 15:03:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96944 ']' 00:23:25.476 15:03:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96944 00:23:25.476 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (96944) - No such process 00:23:25.476 Process with pid 96944 is not found 00:23:25.476 15:03:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 96944 is not found' 00:23:25.476 15:03:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:23:25.476 15:03:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:23:25.476 15:03:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:23:25.476 ************************************ 00:23:25.476 END TEST spdkcli_nvmf_tcp 00:23:25.476 ************************************ 00:23:25.476 00:23:25.476 real 0m16.877s 00:23:25.476 user 0m36.600s 00:23:25.476 sys 0m0.878s 00:23:25.476 15:03:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:25.476 15:03:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.476 15:03:04 -- common/autotest_common.sh@1142 -- # return 0 00:23:25.476 15:03:04 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:23:25.476 15:03:04 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:25.476 15:03:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.476 15:03:04 -- common/autotest_common.sh@10 -- # set +x 00:23:25.476 ************************************ 00:23:25.476 START TEST nvmf_identify_passthru 00:23:25.476 ************************************ 00:23:25.476 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:23:25.476 * Looking for test storage... 00:23:25.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:25.734 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:25.734 15:03:04 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.734 15:03:04 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.734 15:03:04 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.734 15:03:04 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.734 15:03:04 
nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.734 15:03:04 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.734 15:03:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:23:25.734 15:03:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:25.734 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:25.734 15:03:04 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.734 15:03:04 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.734 15:03:04 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.734 15:03:04 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.734 15:03:04 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.734 15:03:04 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.734 15:03:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:23:25.734 15:03:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.734 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.734 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:25.734 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:25.734 Cannot find device "nvmf_tgt_br" 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:25.734 Cannot find device "nvmf_tgt_br2" 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:25.734 Cannot find device "nvmf_tgt_br" 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:23:25.734 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:25.734 Cannot find device "nvmf_tgt_br2" 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:25.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:25.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:25.735 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:25.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:23:25.993 00:23:25.993 --- 10.0.0.2 ping statistics --- 00:23:25.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.993 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:25.993 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:25.993 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:23:25.993 00:23:25.993 --- 10.0.0.3 ping statistics --- 00:23:25.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.993 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:25.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:23:25.993 00:23:25.993 --- 10.0.0.1 ping statistics --- 00:23:25.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.993 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:25.993 15:03:04 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:25.993 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:23:25.993 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.993 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:25.993 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:23:25.993 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:23:25.993 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:23:25.993 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:23:25.993 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:23:25.993 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:23:25.993 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:23:25.993 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:25.993 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:25.993 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:23:25.993 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:23:25.993 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:25.993 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:23:25.993 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:23:25.993 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:23:25.993 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:23:25.993 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:23:25.993 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:23:26.252 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
00:23:26.252 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:23:26.252 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:23:26.252 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:23:26.252 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:23:26.252 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:23:26.252 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.252 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:26.509 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:23:26.509 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:26.509 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:26.509 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=97424 00:23:26.509 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:26.509 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:26.509 15:03:04 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 97424 00:23:26.509 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 97424 ']' 00:23:26.509 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.509 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.509 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.509 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.509 15:03:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:26.509 [2024-07-12 15:03:05.002965] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:23:26.509 [2024-07-12 15:03:05.003718] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.509 [2024-07-12 15:03:05.147982] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:26.766 [2024-07-12 15:03:05.223896] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.766 [2024-07-12 15:03:05.224190] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.766 [2024-07-12 15:03:05.224446] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.766 [2024-07-12 15:03:05.224764] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:26.766 [2024-07-12 15:03:05.224902] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.766 [2024-07-12 15:03:05.225173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.766 [2024-07-12 15:03:05.225245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.766 [2024-07-12 15:03:05.225322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.766 [2024-07-12 15:03:05.225312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.331 15:03:05 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.331 15:03:05 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:23:27.331 15:03:05 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:23:27.331 15:03:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.331 15:03:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:27.331 15:03:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.331 15:03:05 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:23:27.331 15:03:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.331 15:03:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:27.588 [2024-07-12 15:03:06.015901] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:23:27.588 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.588 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:27.588 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.588 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:27.588 [2024-07-12 15:03:06.025338] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.588 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.589 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:27.589 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:27.589 Nvme0n1 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.589 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.589 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.589 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:27.589 [2024-07-12 15:03:06.164146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.589 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:27.589 [ 00:23:27.589 { 00:23:27.589 "allow_any_host": true, 00:23:27.589 "hosts": [], 00:23:27.589 "listen_addresses": [], 00:23:27.589 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:27.589 "subtype": "Discovery" 00:23:27.589 }, 00:23:27.589 { 00:23:27.589 "allow_any_host": true, 00:23:27.589 "hosts": [], 00:23:27.589 "listen_addresses": [ 00:23:27.589 { 00:23:27.589 "adrfam": "IPv4", 00:23:27.589 "traddr": "10.0.0.2", 00:23:27.589 "trsvcid": "4420", 00:23:27.589 "trtype": "TCP" 00:23:27.589 } 00:23:27.589 ], 00:23:27.589 "max_cntlid": 65519, 00:23:27.589 "max_namespaces": 1, 00:23:27.589 "min_cntlid": 1, 00:23:27.589 "model_number": "SPDK bdev Controller", 00:23:27.589 "namespaces": [ 00:23:27.589 { 00:23:27.589 "bdev_name": "Nvme0n1", 00:23:27.589 "name": "Nvme0n1", 00:23:27.589 "nguid": "EB69A7788A6C46E9AC704377A2909328", 00:23:27.589 "nsid": 1, 00:23:27.589 "uuid": "eb69a778-8a6c-46e9-ac70-4377a2909328" 00:23:27.589 } 00:23:27.589 ], 00:23:27.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.589 "serial_number": "SPDK00000000000001", 00:23:27.589 "subtype": "NVMe" 00:23:27.589 } 00:23:27.589 ] 00:23:27.589 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.589 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:27.589 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:23:27.589 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:23:27.846 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:23:27.846 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:27.846 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:23:27.846 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:23:28.105 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:23:28.105 15:03:06 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:23:28.105 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:23:28.105 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.105 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.105 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:28.105 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.105 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:23:28.105 15:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:23:28.105 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:28.105 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:23:28.105 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:28.105 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:23:28.105 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:28.105 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:28.105 rmmod nvme_tcp 00:23:28.105 rmmod nvme_fabrics 00:23:28.105 rmmod nvme_keyring 00:23:28.105 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:28.105 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:23:28.105 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:23:28.105 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 97424 ']' 00:23:28.105 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 97424 00:23:28.105 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 97424 ']' 00:23:28.105 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 97424 00:23:28.105 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:23:28.105 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:28.105 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97424 00:23:28.105 killing process with pid 97424 00:23:28.105 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:28.105 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:28.105 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97424' 00:23:28.105 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 97424 00:23:28.105 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 97424 00:23:28.363 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:28.363 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:28.363 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:28.363 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:28.363 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:28.363 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.363 15:03:06 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:28.363 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.363 15:03:06 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:28.363 00:23:28.363 real 0m2.898s 00:23:28.363 user 0m7.159s 00:23:28.363 sys 0m0.704s 00:23:28.363 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:28.363 15:03:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:28.363 ************************************ 00:23:28.363 END TEST nvmf_identify_passthru 00:23:28.363 ************************************ 00:23:28.363 15:03:06 -- common/autotest_common.sh@1142 -- # return 0 00:23:28.363 15:03:06 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:28.363 15:03:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:28.363 15:03:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:28.363 15:03:06 -- common/autotest_common.sh@10 -- # set +x 00:23:28.363 ************************************ 00:23:28.363 START TEST nvmf_dif 00:23:28.363 ************************************ 00:23:28.363 15:03:06 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:28.622 * Looking for test storage... 00:23:28.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:28.622 15:03:07 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:28.622 15:03:07 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.622 15:03:07 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.622 15:03:07 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.622 15:03:07 nvmf_dif -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.622 15:03:07 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.622 15:03:07 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.622 15:03:07 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:28.622 15:03:07 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:28.622 15:03:07 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:28.622 15:03:07 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:28.622 15:03:07 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:28.622 15:03:07 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:28.622 15:03:07 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.622 15:03:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:28.622 15:03:07 nvmf_dif -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:28.622 Cannot find device "nvmf_tgt_br" 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@155 -- # true 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:28.622 Cannot find device "nvmf_tgt_br2" 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@156 -- # true 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:28.622 Cannot find device "nvmf_tgt_br" 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@158 -- # true 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:28.622 Cannot find device "nvmf_tgt_br2" 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@159 -- # true 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:28.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:28.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@170 -- # ip link 
add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:28.622 15:03:07 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:28.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:23:28.923 00:23:28.923 --- 10.0.0.2 ping statistics --- 00:23:28.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.923 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:28.923 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:28.923 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:23:28.923 00:23:28.923 --- 10.0.0.3 ping statistics --- 00:23:28.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.923 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:28.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:28.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:23:28.923 00:23:28.923 --- 10.0.0.1 ping statistics --- 00:23:28.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.923 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:28.923 15:03:07 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:29.183 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:29.183 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.183 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.183 15:03:07 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.183 15:03:07 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:29.183 15:03:07 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:29.183 15:03:07 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.183 15:03:07 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:29.183 15:03:07 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:29.183 15:03:07 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:29.183 15:03:07 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:29.183 15:03:07 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:29.183 15:03:07 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.183 15:03:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:29.183 15:03:07 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=97769 00:23:29.183 15:03:07 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:29.183 15:03:07 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 97769 00:23:29.183 15:03:07 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 97769 ']' 00:23:29.183 15:03:07 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.183 15:03:07 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.183 15:03:07 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.183 15:03:07 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.183 15:03:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:29.441 [2024-07-12 15:03:07.881619] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:23:29.441 [2024-07-12 15:03:07.882499] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.441 [2024-07-12 15:03:08.022554] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.699 [2024-07-12 15:03:08.115659] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:29.699 [2024-07-12 15:03:08.115716] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.699 [2024-07-12 15:03:08.115731] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.699 [2024-07-12 15:03:08.115742] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.699 [2024-07-12 15:03:08.115752] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.699 [2024-07-12 15:03:08.115781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.631 15:03:09 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.631 15:03:09 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:23:30.631 15:03:09 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.631 15:03:09 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:30.631 15:03:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:30.631 15:03:09 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.631 15:03:09 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:30.631 15:03:09 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:30.631 15:03:09 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.631 15:03:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:30.631 [2024-07-12 15:03:09.052214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.631 15:03:09 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.631 15:03:09 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:30.631 15:03:09 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:30.631 15:03:09 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:30.631 15:03:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:30.631 ************************************ 00:23:30.631 START TEST fio_dif_1_default 00:23:30.631 ************************************ 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:30.631 bdev_null0 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.631 15:03:09 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:30.631 [2024-07-12 15:03:09.096344] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:30.631 { 00:23:30.631 "params": { 00:23:30.631 "name": "Nvme$subsystem", 00:23:30.631 "trtype": "$TEST_TRANSPORT", 00:23:30.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.631 "adrfam": "ipv4", 00:23:30.631 "trsvcid": "$NVMF_PORT", 00:23:30.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.631 "hdgst": ${hdgst:-false}, 00:23:30.631 "ddgst": ${ddgst:-false} 00:23:30.631 }, 00:23:30.631 "method": "bdev_nvme_attach_controller" 00:23:30.631 } 00:23:30.631 EOF 00:23:30.631 )") 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.631 15:03:09 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:30.631 "params": { 00:23:30.631 "name": "Nvme0", 00:23:30.631 "trtype": "tcp", 00:23:30.631 "traddr": "10.0.0.2", 00:23:30.631 "adrfam": "ipv4", 00:23:30.631 "trsvcid": "4420", 00:23:30.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:30.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:30.631 "hdgst": false, 00:23:30.631 "ddgst": false 00:23:30.631 }, 00:23:30.631 "method": "bdev_nvme_attach_controller" 00:23:30.631 }' 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:30.631 15:03:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.889 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:30.889 fio-3.35 00:23:30.889 Starting 1 thread 00:23:43.144 00:23:43.144 filename0: (groupid=0, jobs=1): err= 0: pid=97859: Fri Jul 12 15:03:19 2024 00:23:43.144 read: IOPS=980, BW=3922KiB/s (4016kB/s)(38.3MiB/10004msec) 00:23:43.144 slat (nsec): min=7449, max=82813, avg=10205.73, stdev=6103.02 00:23:43.144 clat (usec): min=457, max=42127, avg=4047.48, stdev=11336.75 00:23:43.144 lat (usec): min=464, max=42141, avg=4057.69, stdev=11337.69 00:23:43.144 clat percentiles (usec): 00:23:43.144 | 1.00th=[ 465], 5.00th=[ 474], 10.00th=[ 482], 20.00th=[ 494], 00:23:43.144 | 30.00th=[ 506], 40.00th=[ 519], 50.00th=[ 553], 60.00th=[ 
611], 00:23:43.144 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 824], 95.00th=[40633], 00:23:43.144 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[42206], 00:23:43.145 | 99.99th=[42206] 00:23:43.145 bw ( KiB/s): min= 926, max=15904, per=99.98%, avg=3921.50, stdev=3270.83, samples=20 00:23:43.145 iops : min= 231, max= 3976, avg=980.35, stdev=817.73, samples=20 00:23:43.145 lat (usec) : 500=26.43%, 750=62.53%, 1000=2.23% 00:23:43.145 lat (msec) : 2=0.16%, 10=0.04%, 50=8.61% 00:23:43.145 cpu : usr=90.42%, sys=8.55%, ctx=77, majf=0, minf=9 00:23:43.145 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.145 issued rwts: total=9808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.145 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:43.145 00:23:43.145 Run status group 0 (all jobs): 00:23:43.145 READ: bw=3922KiB/s (4016kB/s), 3922KiB/s-3922KiB/s (4016kB/s-4016kB/s), io=38.3MiB (40.2MB), run=10004-10004msec 00:23:43.145 15:03:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:43.145 15:03:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:43.145 15:03:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:43.145 15:03:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:43.145 15:03:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:43.145 15:03:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:43.145 15:03:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.145 15:03:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:43.145 15:03:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.145 15:03:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:43.145 15:03:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.145 15:03:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:43.145 ************************************ 00:23:43.145 END TEST fio_dif_1_default 00:23:43.145 ************************************ 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.145 00:23:43.145 real 0m10.942s 00:23:43.145 user 0m9.679s 00:23:43.145 sys 0m1.099s 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:43.145 15:03:20 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:43.145 15:03:20 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:43.145 15:03:20 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:43.145 15:03:20 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:43.145 15:03:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:43.145 ************************************ 00:23:43.145 START TEST fio_dif_1_multi_subsystems 00:23:43.145 ************************************ 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # 
fio_dif_1_multi_subsystems 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:43.145 bdev_null0 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:43.145 [2024-07-12 15:03:20.084443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:43.145 bdev_null1 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.145 15:03:20 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:43.145 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.146 { 00:23:43.146 "params": { 00:23:43.146 "name": "Nvme$subsystem", 00:23:43.146 "trtype": "$TEST_TRANSPORT", 00:23:43.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.146 "adrfam": "ipv4", 00:23:43.146 "trsvcid": "$NVMF_PORT", 00:23:43.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.146 "hdgst": ${hdgst:-false}, 00:23:43.146 "ddgst": ${ddgst:-false} 00:23:43.146 }, 00:23:43.146 "method": "bdev_nvme_attach_controller" 00:23:43.146 } 00:23:43.146 EOF 00:23:43.146 )") 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:43.146 15:03:20 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.146 { 00:23:43.146 "params": { 00:23:43.146 "name": "Nvme$subsystem", 00:23:43.146 "trtype": "$TEST_TRANSPORT", 00:23:43.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.146 "adrfam": "ipv4", 00:23:43.146 "trsvcid": "$NVMF_PORT", 00:23:43.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.146 "hdgst": ${hdgst:-false}, 00:23:43.146 "ddgst": ${ddgst:-false} 00:23:43.146 }, 00:23:43.146 "method": "bdev_nvme_attach_controller" 00:23:43.146 } 00:23:43.146 EOF 00:23:43.146 )") 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
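For orientation, the rpc_cmd calls traced in this test map onto the following standalone sequence. This is a sketch only: rpc_cmd is the autotest wrapper and is assumed here to resolve to scripts/rpc.py in the SPDK repo, and 10.0.0.2:4420 is simply the listener address used by this run.
# assumption: rpc_cmd wraps scripts/rpc.py from the SPDK repo root
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420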
00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:43.146 "params": { 00:23:43.146 "name": "Nvme0", 00:23:43.146 "trtype": "tcp", 00:23:43.146 "traddr": "10.0.0.2", 00:23:43.146 "adrfam": "ipv4", 00:23:43.146 "trsvcid": "4420", 00:23:43.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:43.146 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:43.146 "hdgst": false, 00:23:43.146 "ddgst": false 00:23:43.146 }, 00:23:43.146 "method": "bdev_nvme_attach_controller" 00:23:43.146 },{ 00:23:43.146 "params": { 00:23:43.146 "name": "Nvme1", 00:23:43.146 "trtype": "tcp", 00:23:43.146 "traddr": "10.0.0.2", 00:23:43.146 "adrfam": "ipv4", 00:23:43.146 "trsvcid": "4420", 00:23:43.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:43.146 "hdgst": false, 00:23:43.146 "ddgst": false 00:23:43.146 }, 00:23:43.146 "method": "bdev_nvme_attach_controller" 00:23:43.146 }' 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:43.146 15:03:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:43.146 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:43.146 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:43.146 fio-3.35 00:23:43.146 Starting 2 threads 00:23:53.204 00:23:53.204 filename0: (groupid=0, jobs=1): err= 0: pid=98007: Fri Jul 12 15:03:30 2024 00:23:53.204 read: IOPS=228, BW=913KiB/s (935kB/s)(9152KiB/10023msec) 00:23:53.204 slat (nsec): min=4935, max=67425, avg=11596.78, stdev=7783.86 00:23:53.204 clat (usec): min=462, max=42076, avg=17483.59, stdev=19967.60 00:23:53.204 lat (usec): min=470, max=42104, avg=17495.18, stdev=19967.72 00:23:53.204 clat percentiles (usec): 00:23:53.204 | 1.00th=[ 474], 5.00th=[ 490], 10.00th=[ 498], 20.00th=[ 519], 00:23:53.204 | 30.00th=[ 545], 40.00th=[ 611], 50.00th=[ 742], 60.00th=[40633], 00:23:53.204 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:23:53.204 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:23:53.204 | 99.99th=[42206] 00:23:53.204 bw ( KiB/s): min= 448, max= 2464, per=52.16%, avg=913.60, stdev=440.29, samples=20 00:23:53.204 iops : 
min= 112, max= 616, avg=228.40, stdev=110.07, samples=20 00:23:53.204 lat (usec) : 500=10.62%, 750=39.47%, 1000=2.80% 00:23:53.204 lat (msec) : 2=5.33%, 10=0.17%, 50=41.61% 00:23:53.204 cpu : usr=95.39%, sys=4.06%, ctx=15, majf=0, minf=9 00:23:53.204 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:53.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.204 issued rwts: total=2288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.204 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:53.204 filename1: (groupid=0, jobs=1): err= 0: pid=98008: Fri Jul 12 15:03:30 2024 00:23:53.204 read: IOPS=209, BW=838KiB/s (858kB/s)(8400KiB/10028msec) 00:23:53.204 slat (nsec): min=7848, max=63229, avg=11819.83, stdev=8318.21 00:23:53.204 clat (usec): min=454, max=41864, avg=19059.83, stdev=20142.29 00:23:53.204 lat (usec): min=462, max=41898, avg=19071.65, stdev=20142.42 00:23:53.204 clat percentiles (usec): 00:23:53.204 | 1.00th=[ 474], 5.00th=[ 490], 10.00th=[ 502], 20.00th=[ 523], 00:23:53.204 | 30.00th=[ 553], 40.00th=[ 652], 50.00th=[ 1057], 60.00th=[40633], 00:23:53.204 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:23:53.204 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:23:53.204 | 99.99th=[41681] 00:23:53.204 bw ( KiB/s): min= 480, max= 1728, per=47.88%, avg=838.45, stdev=334.88, samples=20 00:23:53.204 iops : min= 120, max= 432, avg=209.60, stdev=83.73, samples=20 00:23:53.204 lat (usec) : 500=8.90%, 750=33.38%, 1000=6.19% 00:23:53.204 lat (msec) : 2=5.81%, 10=0.19%, 50=45.52% 00:23:53.204 cpu : usr=95.34%, sys=4.09%, ctx=165, majf=0, minf=0 00:23:53.204 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:53.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.204 issued rwts: total=2100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.204 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:53.204 00:23:53.204 Run status group 0 (all jobs): 00:23:53.204 READ: bw=1750KiB/s (1792kB/s), 838KiB/s-913KiB/s (858kB/s-935kB/s), io=17.1MiB (18.0MB), run=10023-10028msec 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
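The two-thread randread run summarized above corresponds to a job file roughly like the sketch below. The rw, bs and iodepth values and the filename0/filename1 section names come from the fio output header; the Nvme0n1/Nvme1n1 bdev names and the timed 10-second run are assumptions, since the job file generated by gen_fio_conf is not echoed into the log.
# minimal sketch of the generated job file, under the assumptions noted above
cat > job.fio <<'EOF'
[global]
ioengine=spdk_bdev
; assumption: spdk_bdev jobs run in thread mode
thread=1
rw=randread
bs=4k
iodepth=4
; assumption: the ~10s elapsed time in the run summary suggests a timed job
time_based=1
runtime=10
[filename0]
; assumption: bdev name = controller Nvme0, namespace 1
filename=Nvme0n1
[filename1]
filename=Nvme1n1
EOF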
00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:53.205 ************************************ 00:23:53.205 END TEST fio_dif_1_multi_subsystems 00:23:53.205 ************************************ 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.205 00:23:53.205 real 0m11.093s 00:23:53.205 user 0m19.855s 00:23:53.205 sys 0m1.035s 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:53.205 15:03:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:53.205 15:03:31 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:53.205 15:03:31 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:53.205 15:03:31 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:53.205 15:03:31 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:53.205 15:03:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:53.205 ************************************ 00:23:53.205 START TEST fio_dif_rand_params 00:23:53.205 ************************************ 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.205 bdev_null0 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.205 [2024-07-12 15:03:31.232715] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.205 { 00:23:53.205 "params": { 00:23:53.205 "name": "Nvme$subsystem", 00:23:53.205 "trtype": "$TEST_TRANSPORT", 00:23:53.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.205 "adrfam": "ipv4", 00:23:53.205 "trsvcid": "$NVMF_PORT", 00:23:53.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.205 "hdgst": ${hdgst:-false}, 00:23:53.205 "ddgst": ${ddgst:-false} 00:23:53.205 }, 00:23:53.205 "method": "bdev_nvme_attach_controller" 00:23:53.205 } 00:23:53.205 EOF 00:23:53.205 )") 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
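The JSON assembled here is consumed by fio through SPDK's external bdev ioengine, launched as traced just below. As a sketch, with ordinary file names standing in for the /dev/fd descriptors the harness actually passes:
# bdev.json (the bdev_nvme_attach_controller config printed below) and job.fio are
# illustrative names; the harness feeds both through /dev/fd instead
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio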
00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:53.205 "params": { 00:23:53.205 "name": "Nvme0", 00:23:53.205 "trtype": "tcp", 00:23:53.205 "traddr": "10.0.0.2", 00:23:53.205 "adrfam": "ipv4", 00:23:53.205 "trsvcid": "4420", 00:23:53.205 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:53.205 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:53.205 "hdgst": false, 00:23:53.205 "ddgst": false 00:23:53.205 }, 00:23:53.205 "method": "bdev_nvme_attach_controller" 00:23:53.205 }' 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:53.205 15:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:53.205 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:53.205 ... 
00:23:53.205 fio-3.35 00:23:53.205 Starting 3 threads 00:23:58.474 00:23:58.474 filename0: (groupid=0, jobs=1): err= 0: pid=98164: Fri Jul 12 15:03:36 2024 00:23:58.474 read: IOPS=178, BW=22.3MiB/s (23.3MB/s)(112MiB/5047msec) 00:23:58.474 slat (nsec): min=5976, max=56680, avg=19127.29, stdev=7545.11 00:23:58.474 clat (usec): min=9005, max=48296, avg=16722.73, stdev=2958.06 00:23:58.474 lat (usec): min=9041, max=48322, avg=16741.86, stdev=2957.67 00:23:58.474 clat percentiles (usec): 00:23:58.474 | 1.00th=[ 9372], 5.00th=[10552], 10.00th=[14222], 20.00th=[15270], 00:23:58.474 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16450], 60.00th=[16909], 00:23:58.474 | 70.00th=[17433], 80.00th=[18744], 90.00th=[20579], 95.00th=[21365], 00:23:58.474 | 99.00th=[23200], 99.50th=[23987], 99.90th=[48497], 99.95th=[48497], 00:23:58.474 | 99.99th=[48497] 00:23:58.474 bw ( KiB/s): min=19456, max=26112, per=28.58%, avg=22942.20, stdev=1859.66, samples=10 00:23:58.474 iops : min= 152, max= 204, avg=179.20, stdev=14.52, samples=10 00:23:58.474 lat (msec) : 10=2.89%, 20=84.09%, 50=13.01% 00:23:58.474 cpu : usr=91.78%, sys=6.48%, ctx=36, majf=0, minf=9 00:23:58.474 IO depths : 1=2.7%, 2=97.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.474 issued rwts: total=899,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.474 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:58.474 filename0: (groupid=0, jobs=1): err= 0: pid=98165: Fri Jul 12 15:03:36 2024 00:23:58.474 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(134MiB/5006msec) 00:23:58.474 slat (nsec): min=5290, max=77483, avg=16318.70, stdev=6373.94 00:23:58.474 clat (usec): min=6784, max=55251, avg=14024.97, stdev=4248.57 00:23:58.474 lat (usec): min=6795, max=55274, avg=14041.29, stdev=4249.30 00:23:58.474 clat percentiles (usec): 00:23:58.474 | 1.00th=[ 7504], 5.00th=[10552], 10.00th=[11469], 20.00th=[12256], 00:23:58.474 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13435], 60.00th=[13960], 00:23:58.474 | 70.00th=[14615], 80.00th=[15533], 90.00th=[16909], 95.00th=[17695], 00:23:58.474 | 99.00th=[20317], 99.50th=[53216], 99.90th=[54264], 99.95th=[55313], 00:23:58.474 | 99.99th=[55313] 00:23:58.474 bw ( KiB/s): min=24064, max=31232, per=34.00%, avg=27295.00, stdev=2325.41, samples=10 00:23:58.474 iops : min= 188, max= 244, avg=213.20, stdev=18.16, samples=10 00:23:58.474 lat (msec) : 10=4.30%, 20=94.67%, 50=0.19%, 100=0.84% 00:23:58.474 cpu : usr=92.15%, sys=6.25%, ctx=6, majf=0, minf=0 00:23:58.474 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.474 issued rwts: total=1069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.474 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:58.474 filename0: (groupid=0, jobs=1): err= 0: pid=98166: Fri Jul 12 15:03:36 2024 00:23:58.474 read: IOPS=239, BW=29.9MiB/s (31.3MB/s)(150MiB/5007msec) 00:23:58.474 slat (nsec): min=4729, max=45935, avg=15678.45, stdev=4892.14 00:23:58.474 clat (usec): min=6742, max=55098, avg=12525.09, stdev=4157.19 00:23:58.474 lat (usec): min=6755, max=55124, avg=12540.76, stdev=4157.63 00:23:58.474 clat percentiles (usec): 00:23:58.474 | 1.00th=[ 7570], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[10814], 00:23:58.474 
| 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:23:58.474 | 70.00th=[12387], 80.00th=[14484], 90.00th=[15795], 95.00th=[16450], 00:23:58.474 | 99.00th=[19792], 99.50th=[52691], 99.90th=[54789], 99.95th=[55313], 00:23:58.474 | 99.99th=[55313] 00:23:58.474 bw ( KiB/s): min=25600, max=35584, per=38.08%, avg=30566.40, stdev=2951.60, samples=10 00:23:58.474 iops : min= 200, max= 278, avg=238.80, stdev=23.06, samples=10 00:23:58.474 lat (msec) : 10=7.10%, 20=91.98%, 50=0.17%, 100=0.75% 00:23:58.475 cpu : usr=91.83%, sys=6.57%, ctx=7, majf=0, minf=0 00:23:58.475 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.475 issued rwts: total=1197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.475 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:58.475 00:23:58.475 Run status group 0 (all jobs): 00:23:58.475 READ: bw=78.4MiB/s (82.2MB/s), 22.3MiB/s-29.9MiB/s (23.3MB/s-31.3MB/s), io=396MiB (415MB), run=5006-5047msec 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.759 bdev_null0 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.759 [2024-07-12 15:03:37.215183] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.759 bdev_null1 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.759 bdev_null2 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:58.759 { 00:23:58.759 "params": { 00:23:58.759 "name": "Nvme$subsystem", 00:23:58.759 
"trtype": "$TEST_TRANSPORT", 00:23:58.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.759 "adrfam": "ipv4", 00:23:58.759 "trsvcid": "$NVMF_PORT", 00:23:58.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.759 "hdgst": ${hdgst:-false}, 00:23:58.759 "ddgst": ${ddgst:-false} 00:23:58.759 }, 00:23:58.759 "method": "bdev_nvme_attach_controller" 00:23:58.759 } 00:23:58.759 EOF 00:23:58.759 )") 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.759 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:58.760 { 00:23:58.760 "params": { 00:23:58.760 "name": "Nvme$subsystem", 00:23:58.760 "trtype": "$TEST_TRANSPORT", 00:23:58.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.760 "adrfam": "ipv4", 00:23:58.760 "trsvcid": "$NVMF_PORT", 00:23:58.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.760 "hdgst": ${hdgst:-false}, 00:23:58.760 "ddgst": ${ddgst:-false} 00:23:58.760 }, 00:23:58.760 "method": "bdev_nvme_attach_controller" 00:23:58.760 } 00:23:58.760 EOF 00:23:58.760 )") 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:58.760 { 00:23:58.760 "params": { 00:23:58.760 "name": "Nvme$subsystem", 00:23:58.760 "trtype": "$TEST_TRANSPORT", 00:23:58.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.760 "adrfam": "ipv4", 00:23:58.760 "trsvcid": "$NVMF_PORT", 00:23:58.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.760 "hdgst": ${hdgst:-false}, 00:23:58.760 "ddgst": ${ddgst:-false} 00:23:58.760 }, 00:23:58.760 "method": "bdev_nvme_attach_controller" 00:23:58.760 } 00:23:58.760 EOF 00:23:58.760 )") 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:58.760 "params": { 00:23:58.760 "name": "Nvme0", 00:23:58.760 "trtype": "tcp", 00:23:58.760 "traddr": "10.0.0.2", 00:23:58.760 "adrfam": "ipv4", 00:23:58.760 "trsvcid": "4420", 00:23:58.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.760 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:58.760 "hdgst": false, 00:23:58.760 "ddgst": false 00:23:58.760 }, 00:23:58.760 "method": "bdev_nvme_attach_controller" 00:23:58.760 },{ 00:23:58.760 "params": { 00:23:58.760 "name": "Nvme1", 00:23:58.760 "trtype": "tcp", 00:23:58.760 "traddr": "10.0.0.2", 00:23:58.760 "adrfam": "ipv4", 00:23:58.760 "trsvcid": "4420", 00:23:58.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.760 "hdgst": false, 00:23:58.760 "ddgst": false 00:23:58.760 }, 00:23:58.760 "method": "bdev_nvme_attach_controller" 00:23:58.760 },{ 00:23:58.760 "params": { 00:23:58.760 "name": "Nvme2", 00:23:58.760 "trtype": "tcp", 00:23:58.760 "traddr": "10.0.0.2", 00:23:58.760 "adrfam": "ipv4", 00:23:58.760 "trsvcid": "4420", 00:23:58.760 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:58.760 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:58.760 "hdgst": false, 00:23:58.760 "ddgst": false 00:23:58.760 }, 00:23:58.760 "method": "bdev_nvme_attach_controller" 00:23:58.760 }' 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:58.760 15:03:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:58.760 15:03:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:59.018 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:59.018 ... 00:23:59.018 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:59.018 ... 00:23:59.018 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:59.018 ... 00:23:59.018 fio-3.35 00:23:59.018 Starting 24 threads 00:24:11.215 00:24:11.215 filename0: (groupid=0, jobs=1): err= 0: pid=98258: Fri Jul 12 15:03:48 2024 00:24:11.215 read: IOPS=209, BW=839KiB/s (859kB/s)(8436KiB/10055msec) 00:24:11.215 slat (usec): min=3, max=7023, avg=18.93, stdev=191.59 00:24:11.215 clat (msec): min=3, max=270, avg=76.05, stdev=35.31 00:24:11.215 lat (msec): min=3, max=271, avg=76.07, stdev=35.31 00:24:11.215 clat percentiles (msec): 00:24:11.216 | 1.00th=[ 5], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 53], 00:24:11.216 | 30.00th=[ 58], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 79], 00:24:11.216 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 113], 95.00th=[ 136], 00:24:11.216 | 99.00th=[ 236], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:24:11.216 | 99.99th=[ 271] 00:24:11.216 bw ( KiB/s): min= 552, max= 1152, per=4.83%, avg=836.80, stdev=159.25, samples=20 00:24:11.216 iops : min= 138, max= 288, avg=209.20, stdev=39.81, samples=20 00:24:11.216 lat (msec) : 4=0.76%, 10=2.28%, 20=0.76%, 50=13.47%, 100=65.72% 00:24:11.216 lat (msec) : 250=16.26%, 500=0.76% 00:24:11.216 cpu : usr=42.94%, sys=1.41%, ctx=1272, majf=0, minf=0 00:24:11.216 IO depths : 1=1.9%, 2=4.3%, 4=13.2%, 8=69.4%, 16=11.3%, 32=0.0%, >=64=0.0% 00:24:11.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 issued rwts: total=2109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.216 filename0: (groupid=0, jobs=1): err= 0: pid=98259: Fri Jul 12 15:03:48 2024 00:24:11.216 read: IOPS=157, BW=631KiB/s (646kB/s)(6324KiB/10023msec) 00:24:11.216 slat (usec): min=3, max=8024, avg=20.26, stdev=225.42 00:24:11.216 clat (msec): min=22, max=455, avg=101.25, stdev=48.30 00:24:11.216 lat (msec): min=22, max=455, avg=101.27, stdev=48.30 00:24:11.216 clat percentiles (msec): 00:24:11.216 | 1.00th=[ 43], 5.00th=[ 63], 10.00th=[ 72], 20.00th=[ 72], 00:24:11.216 | 30.00th=[ 79], 40.00th=[ 84], 50.00th=[ 96], 60.00th=[ 103], 00:24:11.216 | 70.00th=[ 109], 80.00th=[ 120], 90.00th=[ 133], 95.00th=[ 155], 00:24:11.216 | 99.00th=[ 456], 99.50th=[ 456], 99.90th=[ 456], 99.95th=[ 456], 00:24:11.216 | 99.99th=[ 456] 00:24:11.216 bw ( KiB/s): min= 128, max= 768, per=3.61%, avg=626.00, stdev=165.58, samples=20 00:24:11.216 iops : min= 32, max= 192, avg=156.50, stdev=41.40, samples=20 00:24:11.216 lat (msec) : 50=2.47%, 100=56.74%, 250=38.77%, 500=2.02% 00:24:11.216 cpu : usr=39.19%, sys=1.29%, ctx=1114, majf=0, minf=9 00:24:11.216 IO depths : 1=4.0%, 2=8.4%, 4=19.4%, 8=59.6%, 16=8.7%, 32=0.0%, >=64=0.0% 00:24:11.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 complete : 0=0.0%, 4=92.6%, 
8=1.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 issued rwts: total=1581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.216 filename0: (groupid=0, jobs=1): err= 0: pid=98260: Fri Jul 12 15:03:48 2024 00:24:11.216 read: IOPS=163, BW=654KiB/s (670kB/s)(6560KiB/10027msec) 00:24:11.216 slat (usec): min=7, max=8051, avg=18.44, stdev=200.26 00:24:11.216 clat (msec): min=40, max=452, avg=97.58, stdev=47.97 00:24:11.216 lat (msec): min=40, max=452, avg=97.60, stdev=47.98 00:24:11.216 clat percentiles (msec): 00:24:11.216 | 1.00th=[ 46], 5.00th=[ 49], 10.00th=[ 54], 20.00th=[ 70], 00:24:11.216 | 30.00th=[ 77], 40.00th=[ 83], 50.00th=[ 88], 60.00th=[ 102], 00:24:11.216 | 70.00th=[ 109], 80.00th=[ 120], 90.00th=[ 138], 95.00th=[ 153], 00:24:11.216 | 99.00th=[ 255], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:24:11.216 | 99.99th=[ 451] 00:24:11.216 bw ( KiB/s): min= 128, max= 1080, per=3.76%, avg=651.65, stdev=200.15, samples=20 00:24:11.216 iops : min= 32, max= 270, avg=162.90, stdev=50.03, samples=20 00:24:11.216 lat (msec) : 50=7.26%, 100=51.95%, 250=38.84%, 500=1.95% 00:24:11.216 cpu : usr=34.14%, sys=1.34%, ctx=1506, majf=0, minf=9 00:24:11.216 IO depths : 1=2.9%, 2=6.2%, 4=16.0%, 8=65.1%, 16=9.8%, 32=0.0%, >=64=0.0% 00:24:11.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 complete : 0=0.0%, 4=91.4%, 8=3.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 issued rwts: total=1640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.216 filename0: (groupid=0, jobs=1): err= 0: pid=98261: Fri Jul 12 15:03:48 2024 00:24:11.216 read: IOPS=201, BW=807KiB/s (826kB/s)(8104KiB/10042msec) 00:24:11.216 slat (usec): min=7, max=8050, avg=25.96, stdev=328.24 00:24:11.216 clat (msec): min=24, max=311, avg=79.14, stdev=35.14 00:24:11.216 lat (msec): min=24, max=311, avg=79.16, stdev=35.14 00:24:11.216 clat percentiles (msec): 00:24:11.216 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:24:11.216 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 00:24:11.216 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 131], 00:24:11.216 | 99.00th=[ 228], 99.50th=[ 296], 99.90th=[ 313], 99.95th=[ 313], 00:24:11.216 | 99.99th=[ 313] 00:24:11.216 bw ( KiB/s): min= 256, max= 1040, per=4.64%, avg=803.75, stdev=187.19, samples=20 00:24:11.216 iops : min= 64, max= 260, avg=200.90, stdev=46.81, samples=20 00:24:11.216 lat (msec) : 50=18.76%, 100=64.46%, 250=15.89%, 500=0.89% 00:24:11.216 cpu : usr=32.00%, sys=1.06%, ctx=898, majf=0, minf=9 00:24:11.216 IO depths : 1=0.1%, 2=0.2%, 4=4.8%, 8=80.4%, 16=14.5%, 32=0.0%, >=64=0.0% 00:24:11.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 complete : 0=0.0%, 4=89.0%, 8=7.5%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 issued rwts: total=2026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.216 filename0: (groupid=0, jobs=1): err= 0: pid=98262: Fri Jul 12 15:03:48 2024 00:24:11.216 read: IOPS=158, BW=634KiB/s (649kB/s)(6348KiB/10020msec) 00:24:11.216 slat (usec): min=3, max=8035, avg=21.90, stdev=284.69 00:24:11.216 clat (msec): min=44, max=461, avg=100.79, stdev=47.50 00:24:11.216 lat (msec): min=44, max=461, avg=100.81, stdev=47.50 00:24:11.216 clat percentiles (msec): 00:24:11.216 | 1.00th=[ 48], 5.00th=[ 64], 10.00th=[ 70], 20.00th=[ 73], 00:24:11.216 
| 30.00th=[ 79], 40.00th=[ 84], 50.00th=[ 96], 60.00th=[ 100], 00:24:11.216 | 70.00th=[ 111], 80.00th=[ 118], 90.00th=[ 134], 95.00th=[ 144], 00:24:11.216 | 99.00th=[ 460], 99.50th=[ 460], 99.90th=[ 460], 99.95th=[ 460], 00:24:11.216 | 99.99th=[ 460] 00:24:11.216 bw ( KiB/s): min= 128, max= 896, per=3.63%, avg=628.40, stdev=174.49, samples=20 00:24:11.216 iops : min= 32, max= 224, avg=157.10, stdev=43.62, samples=20 00:24:11.216 lat (msec) : 50=2.52%, 100=58.03%, 250=37.43%, 500=2.02% 00:24:11.216 cpu : usr=39.23%, sys=1.44%, ctx=1162, majf=0, minf=9 00:24:11.216 IO depths : 1=3.0%, 2=6.9%, 4=18.1%, 8=62.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:24:11.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 complete : 0=0.0%, 4=92.1%, 8=2.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 issued rwts: total=1587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.216 filename0: (groupid=0, jobs=1): err= 0: pid=98263: Fri Jul 12 15:03:48 2024 00:24:11.216 read: IOPS=165, BW=660KiB/s (676kB/s)(6604KiB/10004msec) 00:24:11.216 slat (usec): min=4, max=8057, avg=29.15, stdev=356.11 00:24:11.216 clat (msec): min=31, max=442, avg=96.74, stdev=47.30 00:24:11.216 lat (msec): min=31, max=442, avg=96.77, stdev=47.32 00:24:11.216 clat percentiles (msec): 00:24:11.216 | 1.00th=[ 46], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 72], 00:24:11.216 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 96], 00:24:11.216 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 132], 95.00th=[ 144], 00:24:11.216 | 99.00th=[ 313], 99.50th=[ 443], 99.90th=[ 443], 99.95th=[ 443], 00:24:11.216 | 99.99th=[ 443] 00:24:11.216 bw ( KiB/s): min= 128, max= 1024, per=3.78%, avg=654.47, stdev=193.39, samples=19 00:24:11.216 iops : min= 32, max= 256, avg=163.58, stdev=48.38, samples=19 00:24:11.216 lat (msec) : 50=3.33%, 100=57.90%, 250=36.83%, 500=1.94% 00:24:11.216 cpu : usr=34.18%, sys=0.94%, ctx=964, majf=0, minf=9 00:24:11.216 IO depths : 1=2.1%, 2=4.8%, 4=13.8%, 8=68.6%, 16=10.8%, 32=0.0%, >=64=0.0% 00:24:11.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 complete : 0=0.0%, 4=91.1%, 8=3.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 issued rwts: total=1651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.216 filename0: (groupid=0, jobs=1): err= 0: pid=98264: Fri Jul 12 15:03:48 2024 00:24:11.216 read: IOPS=212, BW=852KiB/s (872kB/s)(8576KiB/10068msec) 00:24:11.216 slat (usec): min=3, max=4020, avg=14.93, stdev=118.27 00:24:11.216 clat (msec): min=8, max=268, avg=75.00, stdev=32.40 00:24:11.216 lat (msec): min=8, max=268, avg=75.02, stdev=32.40 00:24:11.216 clat percentiles (msec): 00:24:11.216 | 1.00th=[ 13], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 54], 00:24:11.216 | 30.00th=[ 60], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 75], 00:24:11.216 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 108], 95.00th=[ 123], 00:24:11.216 | 99.00th=[ 245], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:24:11.216 | 99.99th=[ 271] 00:24:11.216 bw ( KiB/s): min= 634, max= 1072, per=4.91%, avg=850.90, stdev=124.33, samples=20 00:24:11.216 iops : min= 158, max= 268, avg=212.70, stdev=31.13, samples=20 00:24:11.216 lat (msec) : 10=0.75%, 20=1.49%, 50=12.50%, 100=73.18%, 250=11.43% 00:24:11.216 lat (msec) : 500=0.65% 00:24:11.216 cpu : usr=41.76%, sys=1.39%, ctx=1337, majf=0, minf=9 00:24:11.216 IO depths : 1=1.6%, 2=3.6%, 4=11.8%, 8=71.4%, 16=11.6%, 
32=0.0%, >=64=0.0% 00:24:11.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 issued rwts: total=2144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.216 filename0: (groupid=0, jobs=1): err= 0: pid=98265: Fri Jul 12 15:03:48 2024 00:24:11.216 read: IOPS=177, BW=711KiB/s (728kB/s)(7136KiB/10034msec) 00:24:11.216 slat (usec): min=7, max=2543, avg=13.42, stdev=60.31 00:24:11.216 clat (msec): min=34, max=406, avg=89.81, stdev=41.76 00:24:11.216 lat (msec): min=34, max=406, avg=89.82, stdev=41.76 00:24:11.216 clat percentiles (msec): 00:24:11.216 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 64], 00:24:11.216 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 89], 00:24:11.216 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 129], 95.00th=[ 142], 00:24:11.216 | 99.00th=[ 226], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:24:11.216 | 99.99th=[ 405] 00:24:11.216 bw ( KiB/s): min= 127, max= 960, per=4.08%, avg=707.15, stdev=180.95, samples=20 00:24:11.216 iops : min= 31, max= 240, avg=176.75, stdev=45.37, samples=20 00:24:11.216 lat (msec) : 50=7.40%, 100=64.80%, 250=26.91%, 500=0.90% 00:24:11.216 cpu : usr=33.61%, sys=1.02%, ctx=1441, majf=0, minf=9 00:24:11.216 IO depths : 1=1.2%, 2=2.6%, 4=11.7%, 8=72.7%, 16=11.9%, 32=0.0%, >=64=0.0% 00:24:11.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 complete : 0=0.0%, 4=89.8%, 8=5.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.216 issued rwts: total=1784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.216 filename1: (groupid=0, jobs=1): err= 0: pid=98266: Fri Jul 12 15:03:48 2024 00:24:11.217 read: IOPS=194, BW=777KiB/s (796kB/s)(7792KiB/10029msec) 00:24:11.217 slat (usec): min=7, max=8028, avg=15.83, stdev=181.72 00:24:11.217 clat (msec): min=26, max=281, avg=82.27, stdev=33.86 00:24:11.217 lat (msec): min=26, max=281, avg=82.29, stdev=33.86 00:24:11.217 clat percentiles (msec): 00:24:11.217 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 55], 00:24:11.217 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 79], 60.00th=[ 84], 00:24:11.217 | 70.00th=[ 90], 80.00th=[ 102], 90.00th=[ 121], 95.00th=[ 144], 00:24:11.217 | 99.00th=[ 218], 99.50th=[ 230], 99.90th=[ 284], 99.95th=[ 284], 00:24:11.217 | 99.99th=[ 284] 00:24:11.217 bw ( KiB/s): min= 256, max= 1328, per=4.46%, avg=772.60, stdev=220.50, samples=20 00:24:11.217 iops : min= 64, max= 332, avg=193.15, stdev=55.12, samples=20 00:24:11.217 lat (msec) : 50=13.55%, 100=64.22%, 250=22.13%, 500=0.10% 00:24:11.217 cpu : usr=40.91%, sys=1.45%, ctx=1066, majf=0, minf=9 00:24:11.217 IO depths : 1=0.6%, 2=1.7%, 4=9.1%, 8=75.9%, 16=12.6%, 32=0.0%, >=64=0.0% 00:24:11.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 complete : 0=0.0%, 4=90.0%, 8=5.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 issued rwts: total=1948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.217 filename1: (groupid=0, jobs=1): err= 0: pid=98267: Fri Jul 12 15:03:48 2024 00:24:11.217 read: IOPS=188, BW=755KiB/s (773kB/s)(7584KiB/10042msec) 00:24:11.217 slat (nsec): min=4884, max=63921, avg=12244.51, stdev=6501.96 00:24:11.217 clat (msec): min=23, max=295, avg=84.66, stdev=35.86 00:24:11.217 lat (msec): min=23, 
max=295, avg=84.68, stdev=35.87 00:24:11.217 clat percentiles (msec): 00:24:11.217 | 1.00th=[ 30], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 58], 00:24:11.217 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 85], 00:24:11.217 | 70.00th=[ 95], 80.00th=[ 107], 90.00th=[ 132], 95.00th=[ 157], 00:24:11.217 | 99.00th=[ 215], 99.50th=[ 259], 99.90th=[ 296], 99.95th=[ 296], 00:24:11.217 | 99.99th=[ 296] 00:24:11.217 bw ( KiB/s): min= 488, max= 1120, per=4.34%, avg=751.70, stdev=182.83, samples=20 00:24:11.217 iops : min= 122, max= 280, avg=187.90, stdev=45.72, samples=20 00:24:11.217 lat (msec) : 50=11.71%, 100=66.88%, 250=20.83%, 500=0.58% 00:24:11.217 cpu : usr=39.10%, sys=1.18%, ctx=1075, majf=0, minf=9 00:24:11.217 IO depths : 1=0.9%, 2=1.8%, 4=8.5%, 8=75.5%, 16=13.2%, 32=0.0%, >=64=0.0% 00:24:11.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 complete : 0=0.0%, 4=89.7%, 8=6.2%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.217 filename1: (groupid=0, jobs=1): err= 0: pid=98268: Fri Jul 12 15:03:48 2024 00:24:11.217 read: IOPS=201, BW=807KiB/s (826kB/s)(8112KiB/10053msec) 00:24:11.217 slat (usec): min=4, max=8026, avg=19.82, stdev=251.58 00:24:11.217 clat (msec): min=13, max=292, avg=79.05, stdev=32.90 00:24:11.217 lat (msec): min=13, max=292, avg=79.07, stdev=32.90 00:24:11.217 clat percentiles (msec): 00:24:11.217 | 1.00th=[ 17], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:24:11.217 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 81], 00:24:11.217 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 112], 95.00th=[ 126], 00:24:11.217 | 99.00th=[ 211], 99.50th=[ 292], 99.90th=[ 292], 99.95th=[ 292], 00:24:11.217 | 99.99th=[ 292] 00:24:11.217 bw ( KiB/s): min= 400, max= 1120, per=4.64%, avg=804.60, stdev=162.32, samples=20 00:24:11.217 iops : min= 100, max= 280, avg=201.15, stdev=40.58, samples=20 00:24:11.217 lat (msec) : 20=1.58%, 50=12.08%, 100=70.27%, 250=15.29%, 500=0.79% 00:24:11.217 cpu : usr=41.56%, sys=1.32%, ctx=1075, majf=0, minf=9 00:24:11.217 IO depths : 1=0.8%, 2=1.6%, 4=7.8%, 8=77.1%, 16=12.6%, 32=0.0%, >=64=0.0% 00:24:11.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 complete : 0=0.0%, 4=89.4%, 8=6.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 issued rwts: total=2028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.217 filename1: (groupid=0, jobs=1): err= 0: pid=98269: Fri Jul 12 15:03:48 2024 00:24:11.217 read: IOPS=179, BW=718KiB/s (735kB/s)(7208KiB/10037msec) 00:24:11.217 slat (nsec): min=7829, max=69780, avg=11920.98, stdev=5575.29 00:24:11.217 clat (msec): min=34, max=302, avg=88.94, stdev=39.17 00:24:11.217 lat (msec): min=34, max=302, avg=88.96, stdev=39.17 00:24:11.217 clat percentiles (msec): 00:24:11.217 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:24:11.217 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 91], 00:24:11.217 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 133], 95.00th=[ 165], 00:24:11.217 | 99.00th=[ 251], 99.50th=[ 271], 99.90th=[ 305], 99.95th=[ 305], 00:24:11.217 | 99.99th=[ 305] 00:24:11.217 bw ( KiB/s): min= 256, max= 1120, per=4.12%, avg=714.40, stdev=218.03, samples=20 00:24:11.217 iops : min= 64, max= 280, avg=178.60, stdev=54.51, samples=20 00:24:11.217 lat (msec) : 50=13.10%, 100=59.99%, 250=26.03%, 500=0.89% 
00:24:11.217 cpu : usr=31.83%, sys=1.26%, ctx=897, majf=0, minf=9 00:24:11.217 IO depths : 1=1.2%, 2=2.6%, 4=9.8%, 8=74.0%, 16=12.4%, 32=0.0%, >=64=0.0% 00:24:11.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 issued rwts: total=1802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.217 filename1: (groupid=0, jobs=1): err= 0: pid=98270: Fri Jul 12 15:03:48 2024 00:24:11.217 read: IOPS=184, BW=739KiB/s (757kB/s)(7432KiB/10052msec) 00:24:11.217 slat (usec): min=7, max=4052, avg=15.08, stdev=94.00 00:24:11.217 clat (msec): min=31, max=297, avg=86.39, stdev=35.16 00:24:11.217 lat (msec): min=31, max=297, avg=86.40, stdev=35.16 00:24:11.217 clat percentiles (msec): 00:24:11.217 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 55], 20.00th=[ 62], 00:24:11.217 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 85], 00:24:11.217 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 136], 00:24:11.217 | 99.00th=[ 251], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 300], 00:24:11.217 | 99.99th=[ 300] 00:24:11.217 bw ( KiB/s): min= 254, max= 944, per=4.25%, avg=736.75, stdev=172.37, samples=20 00:24:11.217 iops : min= 63, max= 236, avg=184.15, stdev=43.16, samples=20 00:24:11.217 lat (msec) : 50=7.80%, 100=67.98%, 250=23.04%, 500=1.18% 00:24:11.217 cpu : usr=36.30%, sys=1.33%, ctx=997, majf=0, minf=9 00:24:11.217 IO depths : 1=1.3%, 2=2.9%, 4=10.7%, 8=72.3%, 16=12.8%, 32=0.0%, >=64=0.0% 00:24:11.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 complete : 0=0.0%, 4=90.6%, 8=5.3%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 issued rwts: total=1858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.217 filename1: (groupid=0, jobs=1): err= 0: pid=98271: Fri Jul 12 15:03:48 2024 00:24:11.217 read: IOPS=191, BW=765KiB/s (783kB/s)(7680KiB/10041msec) 00:24:11.217 slat (usec): min=7, max=8029, avg=21.75, stdev=241.84 00:24:11.217 clat (msec): min=34, max=262, avg=83.39, stdev=33.80 00:24:11.217 lat (msec): min=34, max=262, avg=83.41, stdev=33.81 00:24:11.217 clat percentiles (msec): 00:24:11.217 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 57], 00:24:11.217 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 82], 00:24:11.217 | 70.00th=[ 89], 80.00th=[ 108], 90.00th=[ 126], 95.00th=[ 142], 00:24:11.217 | 99.00th=[ 224], 99.50th=[ 236], 99.90th=[ 262], 99.95th=[ 262], 00:24:11.217 | 99.99th=[ 262] 00:24:11.217 bw ( KiB/s): min= 256, max= 1072, per=4.39%, avg=761.35, stdev=186.57, samples=20 00:24:11.217 iops : min= 64, max= 268, avg=190.30, stdev=46.64, samples=20 00:24:11.217 lat (msec) : 50=10.89%, 100=64.84%, 250=23.80%, 500=0.47% 00:24:11.217 cpu : usr=44.22%, sys=1.54%, ctx=1359, majf=0, minf=9 00:24:11.217 IO depths : 1=1.6%, 2=3.6%, 4=11.8%, 8=71.6%, 16=11.4%, 32=0.0%, >=64=0.0% 00:24:11.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 complete : 0=0.0%, 4=90.4%, 8=4.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.217 filename1: (groupid=0, jobs=1): err= 0: pid=98272: Fri Jul 12 15:03:48 2024 00:24:11.217 read: IOPS=175, BW=701KiB/s (718kB/s)(7036KiB/10039msec) 00:24:11.217 slat (usec): min=5, max=8093, 
avg=18.82, stdev=215.10 00:24:11.217 clat (msec): min=36, max=286, avg=91.05, stdev=35.79 00:24:11.217 lat (msec): min=36, max=286, avg=91.06, stdev=35.78 00:24:11.217 clat percentiles (msec): 00:24:11.217 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 69], 00:24:11.217 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 91], 00:24:11.217 | 70.00th=[ 102], 80.00th=[ 110], 90.00th=[ 122], 95.00th=[ 144], 00:24:11.217 | 99.00th=[ 236], 99.50th=[ 288], 99.90th=[ 288], 99.95th=[ 288], 00:24:11.217 | 99.99th=[ 288] 00:24:11.217 bw ( KiB/s): min= 256, max= 920, per=4.02%, avg=697.00, stdev=157.07, samples=20 00:24:11.217 iops : min= 64, max= 230, avg=174.25, stdev=39.27, samples=20 00:24:11.217 lat (msec) : 50=8.98%, 100=59.75%, 250=30.36%, 500=0.91% 00:24:11.217 cpu : usr=35.39%, sys=1.26%, ctx=970, majf=0, minf=9 00:24:11.217 IO depths : 1=1.5%, 2=3.5%, 4=12.4%, 8=70.9%, 16=11.7%, 32=0.0%, >=64=0.0% 00:24:11.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 complete : 0=0.0%, 4=90.1%, 8=5.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 issued rwts: total=1759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.217 filename1: (groupid=0, jobs=1): err= 0: pid=98273: Fri Jul 12 15:03:48 2024 00:24:11.217 read: IOPS=189, BW=760KiB/s (778kB/s)(7624KiB/10033msec) 00:24:11.217 slat (usec): min=5, max=8020, avg=21.79, stdev=260.71 00:24:11.217 clat (msec): min=34, max=263, avg=83.97, stdev=34.99 00:24:11.217 lat (msec): min=34, max=264, avg=84.00, stdev=34.99 00:24:11.217 clat percentiles (msec): 00:24:11.217 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 62], 00:24:11.217 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 84], 00:24:11.217 | 70.00th=[ 89], 80.00th=[ 99], 90.00th=[ 118], 95.00th=[ 132], 00:24:11.217 | 99.00th=[ 253], 99.50th=[ 264], 99.90th=[ 264], 99.95th=[ 264], 00:24:11.217 | 99.99th=[ 264] 00:24:11.217 bw ( KiB/s): min= 256, max= 1016, per=4.37%, avg=756.00, stdev=178.93, samples=20 00:24:11.217 iops : min= 64, max= 254, avg=189.00, stdev=44.73, samples=20 00:24:11.217 lat (msec) : 50=12.54%, 100=71.14%, 250=14.64%, 500=1.68% 00:24:11.217 cpu : usr=37.90%, sys=1.19%, ctx=1104, majf=0, minf=9 00:24:11.217 IO depths : 1=1.5%, 2=3.1%, 4=10.7%, 8=72.9%, 16=11.9%, 32=0.0%, >=64=0.0% 00:24:11.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.217 issued rwts: total=1906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.218 filename2: (groupid=0, jobs=1): err= 0: pid=98274: Fri Jul 12 15:03:48 2024 00:24:11.218 read: IOPS=170, BW=683KiB/s (699kB/s)(6860KiB/10050msec) 00:24:11.218 slat (usec): min=7, max=8053, avg=20.97, stdev=274.18 00:24:11.218 clat (msec): min=36, max=294, avg=93.56, stdev=36.10 00:24:11.218 lat (msec): min=36, max=294, avg=93.58, stdev=36.10 00:24:11.218 clat percentiles (msec): 00:24:11.218 | 1.00th=[ 40], 5.00th=[ 52], 10.00th=[ 60], 20.00th=[ 72], 00:24:11.218 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 95], 00:24:11.218 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 131], 95.00th=[ 157], 00:24:11.218 | 99.00th=[ 239], 99.50th=[ 284], 99.90th=[ 296], 99.95th=[ 296], 00:24:11.218 | 99.99th=[ 296] 00:24:11.218 bw ( KiB/s): min= 254, max= 896, per=3.92%, avg=679.55, stdev=160.10, samples=20 00:24:11.218 iops : min= 63, max= 224, avg=169.85, 
stdev=40.11, samples=20 00:24:11.218 lat (msec) : 50=4.66%, 100=65.01%, 250=29.39%, 500=0.93% 00:24:11.218 cpu : usr=32.14%, sys=1.02%, ctx=919, majf=0, minf=9 00:24:11.218 IO depths : 1=2.6%, 2=5.6%, 4=15.5%, 8=66.1%, 16=10.1%, 32=0.0%, >=64=0.0% 00:24:11.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 complete : 0=0.0%, 4=91.3%, 8=3.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 issued rwts: total=1715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.218 filename2: (groupid=0, jobs=1): err= 0: pid=98275: Fri Jul 12 15:03:48 2024 00:24:11.218 read: IOPS=212, BW=851KiB/s (871kB/s)(8544KiB/10044msec) 00:24:11.218 slat (usec): min=3, max=4025, avg=15.83, stdev=122.88 00:24:11.218 clat (msec): min=30, max=294, avg=75.13, stdev=32.45 00:24:11.218 lat (msec): min=30, max=294, avg=75.15, stdev=32.45 00:24:11.218 clat percentiles (msec): 00:24:11.218 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 53], 00:24:11.218 | 30.00th=[ 58], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 74], 00:24:11.218 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 109], 95.00th=[ 120], 00:24:11.218 | 99.00th=[ 234], 99.50th=[ 288], 99.90th=[ 296], 99.95th=[ 296], 00:24:11.218 | 99.99th=[ 296] 00:24:11.218 bw ( KiB/s): min= 344, max= 1120, per=4.90%, avg=848.00, stdev=191.88, samples=20 00:24:11.218 iops : min= 86, max= 280, avg=212.00, stdev=47.97, samples=20 00:24:11.218 lat (msec) : 50=13.25%, 100=72.52%, 250=13.48%, 500=0.75% 00:24:11.218 cpu : usr=44.58%, sys=1.58%, ctx=1472, majf=0, minf=9 00:24:11.218 IO depths : 1=1.4%, 2=3.0%, 4=10.7%, 8=73.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:24:11.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 complete : 0=0.0%, 4=90.1%, 8=4.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 issued rwts: total=2136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.218 filename2: (groupid=0, jobs=1): err= 0: pid=98276: Fri Jul 12 15:03:48 2024 00:24:11.218 read: IOPS=177, BW=709KiB/s (726kB/s)(7104KiB/10022msec) 00:24:11.218 slat (usec): min=5, max=6033, avg=16.12, stdev=143.05 00:24:11.218 clat (msec): min=36, max=414, avg=90.18, stdev=44.32 00:24:11.218 lat (msec): min=36, max=415, avg=90.20, stdev=44.32 00:24:11.218 clat percentiles (msec): 00:24:11.218 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 64], 00:24:11.218 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 88], 00:24:11.218 | 70.00th=[ 100], 80.00th=[ 109], 90.00th=[ 122], 95.00th=[ 142], 00:24:11.218 | 99.00th=[ 292], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:24:11.218 | 99.99th=[ 414] 00:24:11.218 bw ( KiB/s): min= 128, max= 992, per=4.06%, avg=704.05, stdev=194.57, samples=20 00:24:11.218 iops : min= 32, max= 248, avg=176.00, stdev=48.65, samples=20 00:24:11.218 lat (msec) : 50=7.88%, 100=63.74%, 250=26.58%, 500=1.80% 00:24:11.218 cpu : usr=34.14%, sys=1.14%, ctx=1474, majf=0, minf=9 00:24:11.218 IO depths : 1=1.8%, 2=3.8%, 4=11.5%, 8=71.2%, 16=11.7%, 32=0.0%, >=64=0.0% 00:24:11.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 complete : 0=0.0%, 4=90.6%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.218 filename2: (groupid=0, jobs=1): err= 0: pid=98277: Fri Jul 12 15:03:48 2024 00:24:11.218 read: 
IOPS=169, BW=679KiB/s (695kB/s)(6792KiB/10008msec) 00:24:11.218 slat (usec): min=4, max=8020, avg=16.71, stdev=194.41 00:24:11.218 clat (msec): min=39, max=462, avg=94.16, stdev=47.01 00:24:11.218 lat (msec): min=39, max=462, avg=94.18, stdev=47.01 00:24:11.218 clat percentiles (msec): 00:24:11.218 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 71], 00:24:11.218 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 96], 00:24:11.218 | 70.00th=[ 105], 80.00th=[ 112], 90.00th=[ 124], 95.00th=[ 144], 00:24:11.218 | 99.00th=[ 292], 99.50th=[ 464], 99.90th=[ 464], 99.95th=[ 464], 00:24:11.218 | 99.99th=[ 464] 00:24:11.218 bw ( KiB/s): min= 128, max= 1072, per=3.88%, avg=672.80, stdev=191.31, samples=20 00:24:11.218 iops : min= 32, max= 268, avg=168.20, stdev=47.83, samples=20 00:24:11.218 lat (msec) : 50=7.36%, 100=60.37%, 250=31.21%, 500=1.06% 00:24:11.218 cpu : usr=37.25%, sys=1.26%, ctx=1028, majf=0, minf=9 00:24:11.218 IO depths : 1=2.1%, 2=5.1%, 4=13.7%, 8=67.8%, 16=11.3%, 32=0.0%, >=64=0.0% 00:24:11.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 complete : 0=0.0%, 4=91.4%, 8=3.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 issued rwts: total=1698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.218 filename2: (groupid=0, jobs=1): err= 0: pid=98278: Fri Jul 12 15:03:48 2024 00:24:11.218 read: IOPS=156, BW=626KiB/s (641kB/s)(6272KiB/10027msec) 00:24:11.218 slat (usec): min=4, max=4020, avg=17.24, stdev=132.02 00:24:11.218 clat (msec): min=41, max=453, avg=102.18, stdev=47.48 00:24:11.218 lat (msec): min=41, max=453, avg=102.20, stdev=47.48 00:24:11.218 clat percentiles (msec): 00:24:11.218 | 1.00th=[ 42], 5.00th=[ 61], 10.00th=[ 71], 20.00th=[ 73], 00:24:11.218 | 30.00th=[ 80], 40.00th=[ 87], 50.00th=[ 96], 60.00th=[ 105], 00:24:11.218 | 70.00th=[ 112], 80.00th=[ 121], 90.00th=[ 133], 95.00th=[ 161], 00:24:11.218 | 99.00th=[ 456], 99.50th=[ 456], 99.90th=[ 456], 99.95th=[ 456], 00:24:11.218 | 99.99th=[ 456] 00:24:11.218 bw ( KiB/s): min= 128, max= 768, per=3.58%, avg=620.45, stdev=156.57, samples=20 00:24:11.218 iops : min= 32, max= 192, avg=155.10, stdev=39.13, samples=20 00:24:11.218 lat (msec) : 50=2.87%, 100=53.06%, 250=42.03%, 500=2.04% 00:24:11.218 cpu : usr=39.66%, sys=1.25%, ctx=1141, majf=0, minf=9 00:24:11.218 IO depths : 1=2.5%, 2=5.5%, 4=16.1%, 8=65.8%, 16=10.1%, 32=0.0%, >=64=0.0% 00:24:11.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 complete : 0=0.0%, 4=91.4%, 8=3.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 issued rwts: total=1568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.218 filename2: (groupid=0, jobs=1): err= 0: pid=98279: Fri Jul 12 15:03:48 2024 00:24:11.218 read: IOPS=162, BW=651KiB/s (667kB/s)(6528KiB/10025msec) 00:24:11.218 slat (usec): min=4, max=8023, avg=21.99, stdev=243.08 00:24:11.218 clat (msec): min=37, max=616, avg=98.14, stdev=58.03 00:24:11.218 lat (msec): min=37, max=616, avg=98.16, stdev=58.03 00:24:11.218 clat percentiles (msec): 00:24:11.218 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:24:11.218 | 30.00th=[ 75], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 96], 00:24:11.218 | 70.00th=[ 108], 80.00th=[ 117], 90.00th=[ 134], 95.00th=[ 150], 00:24:11.218 | 99.00th=[ 167], 99.50th=[ 617], 99.90th=[ 617], 99.95th=[ 617], 00:24:11.218 | 99.99th=[ 617] 00:24:11.218 bw ( KiB/s): min= 512, max= 944, 
per=3.93%, avg=680.21, stdev=125.68, samples=19 00:24:11.218 iops : min= 128, max= 236, avg=170.05, stdev=31.42, samples=19 00:24:11.218 lat (msec) : 50=5.70%, 100=57.54%, 250=35.78%, 750=0.98% 00:24:11.218 cpu : usr=36.72%, sys=1.09%, ctx=992, majf=0, minf=9 00:24:11.218 IO depths : 1=2.7%, 2=5.6%, 4=14.5%, 8=66.7%, 16=10.5%, 32=0.0%, >=64=0.0% 00:24:11.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 complete : 0=0.0%, 4=91.3%, 8=3.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 issued rwts: total=1632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.218 filename2: (groupid=0, jobs=1): err= 0: pid=98280: Fri Jul 12 15:03:48 2024 00:24:11.218 read: IOPS=157, BW=628KiB/s (643kB/s)(6296KiB/10024msec) 00:24:11.218 slat (usec): min=4, max=8022, avg=22.31, stdev=247.47 00:24:11.218 clat (msec): min=24, max=608, avg=101.74, stdev=58.12 00:24:11.218 lat (msec): min=24, max=608, avg=101.76, stdev=58.12 00:24:11.218 clat percentiles (msec): 00:24:11.218 | 1.00th=[ 25], 5.00th=[ 61], 10.00th=[ 71], 20.00th=[ 72], 00:24:11.218 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 106], 00:24:11.218 | 70.00th=[ 110], 80.00th=[ 120], 90.00th=[ 136], 95.00th=[ 146], 00:24:11.218 | 99.00th=[ 609], 99.50th=[ 609], 99.90th=[ 609], 99.95th=[ 609], 00:24:11.218 | 99.99th=[ 609] 00:24:11.218 bw ( KiB/s): min= 512, max= 896, per=3.79%, avg=656.00, stdev=105.90, samples=19 00:24:11.218 iops : min= 128, max= 224, avg=164.00, stdev=26.47, samples=19 00:24:11.218 lat (msec) : 50=3.37%, 100=52.67%, 250=42.95%, 750=1.02% 00:24:11.218 cpu : usr=35.66%, sys=1.06%, ctx=967, majf=0, minf=9 00:24:11.218 IO depths : 1=2.3%, 2=5.5%, 4=15.2%, 8=66.5%, 16=10.6%, 32=0.0%, >=64=0.0% 00:24:11.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 complete : 0=0.0%, 4=91.4%, 8=3.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 issued rwts: total=1574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.218 filename2: (groupid=0, jobs=1): err= 0: pid=98281: Fri Jul 12 15:03:48 2024 00:24:11.218 read: IOPS=184, BW=739KiB/s (756kB/s)(7412KiB/10034msec) 00:24:11.218 slat (usec): min=5, max=8045, avg=22.09, stdev=279.44 00:24:11.218 clat (msec): min=31, max=473, avg=86.47, stdev=49.02 00:24:11.218 lat (msec): min=31, max=473, avg=86.49, stdev=49.02 00:24:11.218 clat percentiles (msec): 00:24:11.218 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 58], 00:24:11.218 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 83], 00:24:11.218 | 70.00th=[ 93], 80.00th=[ 109], 90.00th=[ 132], 95.00th=[ 153], 00:24:11.218 | 99.00th=[ 241], 99.50th=[ 472], 99.90th=[ 472], 99.95th=[ 472], 00:24:11.218 | 99.99th=[ 472] 00:24:11.218 bw ( KiB/s): min= 128, max= 1152, per=4.24%, avg=734.85, stdev=252.02, samples=20 00:24:11.218 iops : min= 32, max= 288, avg=183.70, stdev=63.00, samples=20 00:24:11.218 lat (msec) : 50=11.01%, 100=63.90%, 250=24.23%, 500=0.86% 00:24:11.218 cpu : usr=36.59%, sys=1.07%, ctx=1057, majf=0, minf=9 00:24:11.218 IO depths : 1=0.5%, 2=1.2%, 4=8.0%, 8=77.0%, 16=13.3%, 32=0.0%, >=64=0.0% 00:24:11.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 complete : 0=0.0%, 4=89.3%, 8=6.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.218 issued rwts: total=1853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.218 latency : target=0, window=0, percentile=100.00%, depth=16 
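The per-file blocks above all follow the same fio layout: completion-latency percentiles in msec, the average bandwidth with its share of the group total ("per="), the IO-depth histogram, and the issued rwts counters. A quick worked check of the "per=" field against the Run status line that follows, using only numbers already printed in this log: the first filename0 job averages 628.40 KiB/s while the whole group reads 16.9 MiB/s ≈ 17306 KiB/s, so 628.40 / 17306 ≈ 3.63%, matching the reported per=3.63%; likewise the group total of io=170 MiB over roughly 10 s of runtime per job works out to the ~16.9 MiB/s aggregate.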
00:24:11.218 00:24:11.218 Run status group 0 (all jobs): 00:24:11.218 READ: bw=16.9MiB/s (17.7MB/s), 626KiB/s-852KiB/s (641kB/s-872kB/s), io=170MiB (179MB), run=10004-10068msec 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.219 bdev_null0 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.219 [2024-07-12 15:03:48.592063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.219 bdev_null1 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:11.219 { 00:24:11.219 "params": { 00:24:11.219 "name": "Nvme$subsystem", 00:24:11.219 "trtype": "$TEST_TRANSPORT", 00:24:11.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.219 "adrfam": "ipv4", 00:24:11.219 "trsvcid": "$NVMF_PORT", 00:24:11.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.219 "hdgst": ${hdgst:-false}, 00:24:11.219 "ddgst": ${ddgst:-false} 00:24:11.219 }, 00:24:11.219 "method": "bdev_nvme_attach_controller" 00:24:11.219 } 00:24:11.219 EOF 00:24:11.219 )") 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 
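The rpc_cmd invocations traced above are the harness's wrapper around SPDK's scripts/rpc.py, driving the nvmf target that was started earlier in this log. Outside the harness the same DIF-enabled setup could be issued directly; a sketch, assuming the target is already up with its TCP transport created, with the arguments copied from the trace above:

  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The second subsystem (bdev_null1 / cnode1) is created the same way, and teardown reverses the order, as in the nvmf_delete_subsystem and bdev_null_delete calls traced just before this block.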
00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:11.219 { 00:24:11.219 "params": { 00:24:11.219 "name": "Nvme$subsystem", 00:24:11.219 "trtype": "$TEST_TRANSPORT", 00:24:11.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.219 "adrfam": "ipv4", 00:24:11.219 "trsvcid": "$NVMF_PORT", 00:24:11.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.219 "hdgst": ${hdgst:-false}, 00:24:11.219 "ddgst": ${ddgst:-false} 00:24:11.219 }, 00:24:11.219 "method": "bdev_nvme_attach_controller" 00:24:11.219 } 00:24:11.219 EOF 00:24:11.219 )") 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:11.219 15:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:11.220 15:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:24:11.220 15:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:24:11.220 15:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:11.220 "params": { 00:24:11.220 "name": "Nvme0", 00:24:11.220 "trtype": "tcp", 00:24:11.220 "traddr": "10.0.0.2", 00:24:11.220 "adrfam": "ipv4", 00:24:11.220 "trsvcid": "4420", 00:24:11.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:11.220 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:11.220 "hdgst": false, 00:24:11.220 "ddgst": false 00:24:11.220 }, 00:24:11.220 "method": "bdev_nvme_attach_controller" 00:24:11.220 },{ 00:24:11.220 "params": { 00:24:11.220 "name": "Nvme1", 00:24:11.220 "trtype": "tcp", 00:24:11.220 "traddr": "10.0.0.2", 00:24:11.220 "adrfam": "ipv4", 00:24:11.220 "trsvcid": "4420", 00:24:11.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:11.220 "hdgst": false, 00:24:11.220 "ddgst": false 00:24:11.220 }, 00:24:11.220 "method": "bdev_nvme_attach_controller" 00:24:11.220 }' 00:24:11.220 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:11.220 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:11.220 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.220 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:11.220 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:11.220 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:11.220 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:11.220 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:11.220 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:11.220 15:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:11.220 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:11.220 ... 00:24:11.220 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:11.220 ... 
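The two heredoc templates traced above are filled in once per subsystem, joined with IFS=, and handed to fio over /dev/fd/62 as --spdk_json_conf, which is why the final printf shows one bdev_nvme_attach_controller entry for Nvme0 and one for Nvme1. A hand-written equivalent for a single controller might look like the sketch below; the params block is copied verbatim from the log, but the outer "subsystems"/"bdev"/"config" wrapper is an assumption based on the usual SPDK JSON config shape, and the file name is illustrative only:

  cat > nvme_conf.json <<'JSON'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  JSON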
00:24:11.220 fio-3.35 00:24:11.220 Starting 4 threads 00:24:16.500 00:24:16.500 filename0: (groupid=0, jobs=1): err= 0: pid=98413: Fri Jul 12 15:03:54 2024 00:24:16.500 read: IOPS=1884, BW=14.7MiB/s (15.4MB/s)(73.6MiB/5002msec) 00:24:16.500 slat (nsec): min=5462, max=72813, avg=16299.29, stdev=4292.84 00:24:16.500 clat (usec): min=1730, max=8881, avg=4165.30, stdev=322.35 00:24:16.500 lat (usec): min=1743, max=8890, avg=4181.60, stdev=322.26 00:24:16.500 clat percentiles (usec): 00:24:16.500 | 1.00th=[ 3949], 5.00th=[ 3982], 10.00th=[ 4015], 20.00th=[ 4015], 00:24:16.500 | 30.00th=[ 4047], 40.00th=[ 4080], 50.00th=[ 4080], 60.00th=[ 4113], 00:24:16.500 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4686], 00:24:16.500 | 99.00th=[ 5276], 99.50th=[ 5735], 99.90th=[ 7635], 99.95th=[ 8029], 00:24:16.500 | 99.99th=[ 8848] 00:24:16.500 bw ( KiB/s): min=14592, max=15488, per=25.01%, avg=15093.11, stdev=423.22, samples=9 00:24:16.500 iops : min= 1824, max= 1936, avg=1886.56, stdev=52.87, samples=9 00:24:16.500 lat (msec) : 2=0.03%, 4=8.53%, 10=91.44% 00:24:16.500 cpu : usr=93.24%, sys=5.50%, ctx=7, majf=0, minf=9 00:24:16.500 IO depths : 1=11.5%, 2=25.0%, 4=50.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:16.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.500 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.500 issued rwts: total=9424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.500 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:16.500 filename0: (groupid=0, jobs=1): err= 0: pid=98414: Fri Jul 12 15:03:54 2024 00:24:16.500 read: IOPS=1886, BW=14.7MiB/s (15.5MB/s)(73.7MiB/5002msec) 00:24:16.500 slat (nsec): min=5572, max=61160, avg=15131.39, stdev=6200.96 00:24:16.500 clat (usec): min=2611, max=6718, avg=4170.73, stdev=266.95 00:24:16.500 lat (usec): min=2619, max=6734, avg=4185.86, stdev=266.24 00:24:16.500 clat percentiles (usec): 00:24:16.500 | 1.00th=[ 3916], 5.00th=[ 3982], 10.00th=[ 4015], 20.00th=[ 4047], 00:24:16.501 | 30.00th=[ 4047], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4113], 00:24:16.501 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4686], 00:24:16.501 | 99.00th=[ 5276], 99.50th=[ 5604], 99.90th=[ 6194], 99.95th=[ 6521], 00:24:16.501 | 99.99th=[ 6718] 00:24:16.501 bw ( KiB/s): min=14592, max=15488, per=25.04%, avg=15109.33, stdev=422.41, samples=9 00:24:16.501 iops : min= 1824, max= 1936, avg=1888.67, stdev=52.80, samples=9 00:24:16.501 lat (msec) : 4=9.04%, 10=90.96% 00:24:16.501 cpu : usr=93.80%, sys=4.78%, ctx=7, majf=0, minf=9 00:24:16.501 IO depths : 1=11.7%, 2=24.4%, 4=50.6%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:16.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.501 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.501 issued rwts: total=9435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.501 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:16.501 filename1: (groupid=0, jobs=1): err= 0: pid=98415: Fri Jul 12 15:03:54 2024 00:24:16.501 read: IOPS=1889, BW=14.8MiB/s (15.5MB/s)(73.9MiB/5004msec) 00:24:16.501 slat (nsec): min=6791, max=50888, avg=9559.70, stdev=3222.27 00:24:16.501 clat (usec): min=1346, max=6508, avg=4184.08, stdev=259.94 00:24:16.501 lat (usec): min=1361, max=6524, avg=4193.64, stdev=260.03 00:24:16.501 clat percentiles (usec): 00:24:16.501 | 1.00th=[ 3982], 5.00th=[ 4047], 10.00th=[ 4047], 20.00th=[ 4080], 00:24:16.501 | 30.00th=[ 4080], 40.00th=[ 
4113], 50.00th=[ 4113], 60.00th=[ 4113], 00:24:16.501 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4686], 00:24:16.501 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 5538], 99.95th=[ 5669], 00:24:16.501 | 99.99th=[ 6521] 00:24:16.501 bw ( KiB/s): min=14592, max=15616, per=25.10%, avg=15146.67, stdev=414.77, samples=9 00:24:16.501 iops : min= 1824, max= 1952, avg=1893.33, stdev=51.85, samples=9 00:24:16.501 lat (msec) : 2=0.24%, 4=0.98%, 10=98.77% 00:24:16.501 cpu : usr=93.63%, sys=5.10%, ctx=10, majf=0, minf=0 00:24:16.501 IO depths : 1=11.1%, 2=25.0%, 4=50.0%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:16.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.501 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.501 issued rwts: total=9456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.501 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:16.501 filename1: (groupid=0, jobs=1): err= 0: pid=98416: Fri Jul 12 15:03:54 2024 00:24:16.501 read: IOPS=1885, BW=14.7MiB/s (15.4MB/s)(73.7MiB/5003msec) 00:24:16.501 slat (usec): min=7, max=177, avg=16.77, stdev= 4.91 00:24:16.501 clat (usec): min=2806, max=6696, avg=4161.61, stdev=268.83 00:24:16.501 lat (usec): min=2833, max=6711, avg=4178.38, stdev=268.75 00:24:16.501 clat percentiles (usec): 00:24:16.501 | 1.00th=[ 3916], 5.00th=[ 3982], 10.00th=[ 4015], 20.00th=[ 4015], 00:24:16.501 | 30.00th=[ 4047], 40.00th=[ 4080], 50.00th=[ 4080], 60.00th=[ 4113], 00:24:16.501 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4686], 00:24:16.501 | 99.00th=[ 5211], 99.50th=[ 5538], 99.90th=[ 6194], 99.95th=[ 6456], 00:24:16.501 | 99.99th=[ 6718] 00:24:16.501 bw ( KiB/s): min=14592, max=15488, per=25.03%, avg=15104.00, stdev=429.33, samples=9 00:24:16.501 iops : min= 1824, max= 1936, avg=1888.00, stdev=53.67, samples=9 00:24:16.501 lat (msec) : 4=9.79%, 10=90.21% 00:24:16.501 cpu : usr=92.70%, sys=5.64%, ctx=41, majf=0, minf=0 00:24:16.501 IO depths : 1=11.7%, 2=25.0%, 4=50.0%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:16.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.501 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.501 issued rwts: total=9432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.501 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:16.501 00:24:16.501 Run status group 0 (all jobs): 00:24:16.501 READ: bw=58.9MiB/s (61.8MB/s), 14.7MiB/s-14.8MiB/s (15.4MB/s-15.5MB/s), io=295MiB (309MB), run=5002-5004msec 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null0 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.501 ************************************ 00:24:16.501 END TEST fio_dif_rand_params 00:24:16.501 ************************************ 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.501 00:24:16.501 real 0m23.419s 00:24:16.501 user 2m5.378s 00:24:16.501 sys 0m5.718s 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:16.501 15:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.501 15:03:54 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:24:16.501 15:03:54 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:16.501 15:03:54 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:16.501 15:03:54 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:16.501 15:03:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:16.501 ************************************ 00:24:16.501 START TEST fio_dif_digest 00:24:16.501 ************************************ 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:24:16.501 15:03:54 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:16.501 bdev_null0 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:16.501 [2024-07-12 15:03:54.700610] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:16.501 15:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:16.501 { 00:24:16.501 "params": { 00:24:16.501 "name": "Nvme$subsystem", 00:24:16.501 "trtype": "$TEST_TRANSPORT", 00:24:16.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:16.501 "adrfam": "ipv4", 00:24:16.501 "trsvcid": "$NVMF_PORT", 00:24:16.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:16.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:16.501 "hdgst": ${hdgst:-false}, 00:24:16.501 "ddgst": ${ddgst:-false} 00:24:16.501 }, 00:24:16.501 "method": "bdev_nvme_attach_controller" 00:24:16.501 } 00:24:16.501 EOF 00:24:16.501 )") 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:16.502 "params": { 00:24:16.502 "name": "Nvme0", 00:24:16.502 "trtype": "tcp", 00:24:16.502 "traddr": "10.0.0.2", 00:24:16.502 "adrfam": "ipv4", 00:24:16.502 "trsvcid": "4420", 00:24:16.502 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.502 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:16.502 "hdgst": true, 00:24:16.502 "ddgst": true 00:24:16.502 }, 00:24:16.502 "method": "bdev_nvme_attach_controller" 00:24:16.502 }' 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 
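The I/O itself is driven by the stock fio binary, with SPDK's bdev engine pulled in through the LD_PRELOAD of build/fio/spdk_bdev set in the line above and executed by the fio command that follows; the JSON bdev config and the generated job file arrive on fds 62 and 61. With ordinary files instead of process substitution the same invocation would look roughly like this (file names are illustrative, the flags are copied from the trace):

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf nvme_conf.json dif_digest.job

For this fio_dif_digest run the functional difference from the earlier jobs is visible in the attach parameters printed above, "hdgst": true and "ddgst": true, so the NVMe/TCP PDUs carry header and data digests while fio reads the DIF-type-3 null bdev created a few lines earlier.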
00:24:16.502 15:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:16.502 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:16.502 ... 00:24:16.502 fio-3.35 00:24:16.502 Starting 3 threads 00:24:28.694 00:24:28.694 filename0: (groupid=0, jobs=1): err= 0: pid=98518: Fri Jul 12 15:04:05 2024 00:24:28.694 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(243MiB/10008msec) 00:24:28.694 slat (nsec): min=8094, max=60995, avg=15949.56, stdev=6168.02 00:24:28.694 clat (usec): min=8933, max=28014, avg=15398.35, stdev=2396.72 00:24:28.694 lat (usec): min=8942, max=28047, avg=15414.30, stdev=2397.47 00:24:28.694 clat percentiles (usec): 00:24:28.694 | 1.00th=[11731], 5.00th=[12911], 10.00th=[13304], 20.00th=[13829], 00:24:28.694 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14877], 60.00th=[15270], 00:24:28.694 | 70.00th=[15795], 80.00th=[16581], 90.00th=[17957], 95.00th=[19530], 00:24:28.694 | 99.00th=[25035], 99.50th=[26346], 99.90th=[27132], 99.95th=[27919], 00:24:28.694 | 99.99th=[27919] 00:24:28.694 bw ( KiB/s): min=20992, max=27136, per=34.11%, avg=24885.55, stdev=1771.37, samples=20 00:24:28.694 iops : min= 164, max= 212, avg=194.40, stdev=13.85, samples=20 00:24:28.694 lat (msec) : 10=0.26%, 20=95.27%, 50=4.47% 00:24:28.694 cpu : usr=92.19%, sys=6.30%, ctx=30, majf=0, minf=9 00:24:28.694 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.694 issued rwts: total=1947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.694 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:28.694 filename0: (groupid=0, jobs=1): err= 0: pid=98519: Fri Jul 12 15:04:05 2024 00:24:28.694 read: IOPS=217, BW=27.1MiB/s (28.5MB/s)(272MiB/10007msec) 00:24:28.694 slat (nsec): min=4700, max=70073, avg=17034.72, stdev=5731.62 00:24:28.694 clat (usec): min=8380, max=61592, avg=13791.99, stdev=3345.48 00:24:28.694 lat (usec): min=8415, max=61626, avg=13809.02, stdev=3346.31 00:24:28.694 clat percentiles (usec): 00:24:28.694 | 1.00th=[10552], 5.00th=[11469], 10.00th=[11863], 20.00th=[12256], 00:24:28.694 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:24:28.694 | 70.00th=[13698], 80.00th=[14353], 90.00th=[16450], 95.00th=[18220], 00:24:28.694 | 99.00th=[24511], 99.50th=[25297], 99.90th=[58983], 99.95th=[61080], 00:24:28.694 | 99.99th=[61604] 00:24:28.694 bw ( KiB/s): min=23296, max=32000, per=38.09%, avg=27788.80, stdev=2410.77, samples=20 00:24:28.694 iops : min= 182, max= 250, avg=217.10, stdev=18.83, samples=20 00:24:28.694 lat (msec) : 10=0.32%, 20=96.18%, 50=3.22%, 100=0.28% 00:24:28.694 cpu : usr=91.59%, sys=6.72%, ctx=11, majf=0, minf=0 00:24:28.694 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.694 issued rwts: total=2173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.694 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:28.694 filename0: (groupid=0, jobs=1): err= 0: pid=98520: Fri Jul 12 15:04:05 2024 00:24:28.694 read: IOPS=158, BW=19.8MiB/s (20.7MB/s)(198MiB/10009msec) 00:24:28.694 slat (nsec): min=5016, 
max=77018, avg=17778.10, stdev=6771.70 00:24:28.694 clat (usec): min=9002, max=30754, avg=18928.59, stdev=2477.47 00:24:28.694 lat (usec): min=9025, max=30778, avg=18946.37, stdev=2477.15 00:24:28.694 clat percentiles (usec): 00:24:28.694 | 1.00th=[15270], 5.00th=[16319], 10.00th=[16712], 20.00th=[17171], 00:24:28.694 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18482], 60.00th=[18744], 00:24:28.694 | 70.00th=[19268], 80.00th=[19792], 90.00th=[21890], 95.00th=[24249], 00:24:28.694 | 99.00th=[28181], 99.50th=[28967], 99.90th=[30802], 99.95th=[30802], 00:24:28.694 | 99.99th=[30802] 00:24:28.694 bw ( KiB/s): min=17152, max=21760, per=27.76%, avg=20249.60, stdev=1126.34, samples=20 00:24:28.694 iops : min= 134, max= 170, avg=158.20, stdev= 8.80, samples=20 00:24:28.694 lat (msec) : 10=0.06%, 20=81.06%, 50=18.88% 00:24:28.694 cpu : usr=91.94%, sys=6.50%, ctx=86, majf=0, minf=9 00:24:28.694 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.694 issued rwts: total=1584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.694 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:28.694 00:24:28.694 Run status group 0 (all jobs): 00:24:28.694 READ: bw=71.2MiB/s (74.7MB/s), 19.8MiB/s-27.1MiB/s (20.7MB/s-28.5MB/s), io=713MiB (748MB), run=10007-10009msec 00:24:28.694 15:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:28.694 15:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:28.694 15:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:28.694 15:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:28.694 15:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:28.694 15:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:28.694 15:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.694 15:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:28.694 15:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.694 15:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:28.694 15:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.694 15:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:28.694 15:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.694 00:24:28.694 real 0m10.843s 00:24:28.694 user 0m28.131s 00:24:28.694 sys 0m2.164s 00:24:28.694 ************************************ 00:24:28.694 END TEST fio_dif_digest 00:24:28.694 ************************************ 00:24:28.694 15:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:28.694 15:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:28.694 15:04:05 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:24:28.694 15:04:05 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:28.694 15:04:05 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:28.694 15:04:05 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:28.694 15:04:05 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:24:28.694 15:04:05 nvmf_dif -- nvmf/common.sh@119 
-- # '[' tcp == tcp ']' 00:24:28.694 15:04:05 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:24:28.694 15:04:05 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:28.694 15:04:05 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:28.694 rmmod nvme_tcp 00:24:28.694 rmmod nvme_fabrics 00:24:28.694 rmmod nvme_keyring 00:24:28.694 15:04:05 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:28.694 15:04:05 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:24:28.694 15:04:05 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:24:28.694 15:04:05 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 97769 ']' 00:24:28.694 15:04:05 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 97769 00:24:28.694 15:04:05 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 97769 ']' 00:24:28.694 15:04:05 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 97769 00:24:28.694 15:04:05 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:24:28.694 15:04:05 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:28.694 15:04:05 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97769 00:24:28.694 killing process with pid 97769 00:24:28.694 15:04:05 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:28.694 15:04:05 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:28.694 15:04:05 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97769' 00:24:28.694 15:04:05 nvmf_dif -- common/autotest_common.sh@967 -- # kill 97769 00:24:28.694 15:04:05 nvmf_dif -- common/autotest_common.sh@972 -- # wait 97769 00:24:28.694 15:04:05 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:28.694 15:04:05 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:28.694 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:28.694 Waiting for block devices as requested 00:24:28.694 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:28.694 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:28.694 15:04:06 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:28.694 15:04:06 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:28.694 15:04:06 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.694 15:04:06 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:28.694 15:04:06 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.695 15:04:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:28.695 15:04:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.695 15:04:06 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:28.695 00:24:28.695 real 0m59.381s 00:24:28.695 user 3m49.733s 00:24:28.695 sys 0m15.454s 00:24:28.695 15:04:06 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:28.695 15:04:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:28.695 ************************************ 00:24:28.695 END TEST nvmf_dif 00:24:28.695 ************************************ 00:24:28.695 15:04:06 -- common/autotest_common.sh@1142 -- # return 0 00:24:28.695 15:04:06 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:28.695 15:04:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:28.695 15:04:06 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:24:28.695 15:04:06 -- common/autotest_common.sh@10 -- # set +x 00:24:28.695 ************************************ 00:24:28.695 START TEST nvmf_abort_qd_sizes 00:24:28.695 ************************************ 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:28.695 * Looking for test storage... 00:24:28.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:28.695 15:04:06 
nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:28.695 Cannot find device "nvmf_tgt_br" 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.695 Cannot find device "nvmf_tgt_br2" 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:28.695 Cannot find device "nvmf_tgt_br" 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:28.695 Cannot find device "nvmf_tgt_br2" 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:28.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:28.695 15:04:06 
nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:28.695 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:28.696 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:28.696 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:28.696 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:28.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:24:28.696 00:24:28.696 --- 10.0.0.2 ping statistics --- 00:24:28.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.696 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:24:28.696 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:28.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:28.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:24:28.696 00:24:28.696 --- 10.0.0.3 ping statistics --- 00:24:28.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.696 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:24:28.696 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:28.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:24:28.696 00:24:28.696 --- 10.0.0.1 ping statistics --- 00:24:28.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.696 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:24:28.696 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.696 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:24:28.696 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:24:28.696 15:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:28.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:28.954 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:28.954 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:29.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=99101 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 99101 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 99101 ']' 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.213 15:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:29.213 [2024-07-12 15:04:07.731826] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
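Before the abort runs further down, nvmftestinit builds the self-contained test network the target listens on: a namespace nvmf_tgt_ns_spdk holds the target ends of the veth pairs (10.0.0.2 and 10.0.0.3), the initiator end stays in the root namespace as 10.0.0.1, a bridge nvmf_br joins the peer ends, iptables accepts TCP/4420, and nvmf_tgt is then launched inside the namespace. A condensed sketch of that topology, using the names and addresses from the trace (the second target interface and all cleanup/error handling are omitted):

# Condensed sketch of the veth/netns test topology traced above.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Target app runs inside the namespace; the test then waits for its RPC
# socket before issuing nvmf_create_transport / nvmf_create_subsystem calls.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &

ping -c 1 10.0.0.2   # initiator -> target reachability check, as in the trace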
00:24:29.213 [2024-07-12 15:04:07.731920] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.472 [2024-07-12 15:04:07.868754] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:29.472 [2024-07-12 15:04:07.936078] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.472 [2024-07-12 15:04:07.936392] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.472 [2024-07-12 15:04:07.936561] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.472 [2024-07-12 15:04:07.936709] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.472 [2024-07-12 15:04:07.936746] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:29.472 [2024-07-12 15:04:07.936935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.472 [2024-07-12 15:04:07.936998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.472 [2024-07-12 15:04:07.937163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.472 [2024-07-12 15:04:07.937061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:24:30.406 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:30.407 15:04:08 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:30.407 15:04:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:30.407 ************************************ 00:24:30.407 START TEST spdk_target_abort 00:24:30.407 ************************************ 00:24:30.407 15:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:24:30.407 15:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:30.407 15:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:30.407 15:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.407 15:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:30.407 spdk_targetn1 00:24:30.407 15:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.407 15:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:30.407 15:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.407 15:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:30.407 [2024-07-12 15:04:09.000158] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:30.407 [2024-07-12 15:04:09.036221] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.407 15:04:09 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:30.407 15:04:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:33.685 Initializing NVMe Controllers 00:24:33.685 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:33.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:33.685 Initialization complete. Launching workers. 
00:24:33.685 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11183, failed: 0 00:24:33.685 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1042, failed to submit 10141 00:24:33.685 success 802, unsuccess 240, failed 0 00:24:33.685 15:04:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:33.685 15:04:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:37.886 Initializing NVMe Controllers 00:24:37.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:37.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:37.886 Initialization complete. Launching workers. 00:24:37.886 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5911, failed: 0 00:24:37.886 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1208, failed to submit 4703 00:24:37.886 success 286, unsuccess 922, failed 0 00:24:37.886 15:04:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:37.886 15:04:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:40.411 Initializing NVMe Controllers 00:24:40.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:40.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:40.411 Initialization complete. Launching workers. 
00:24:40.411 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28258, failed: 0 00:24:40.411 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2567, failed to submit 25691 00:24:40.411 success 281, unsuccess 2286, failed 0 00:24:40.411 15:04:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:40.411 15:04:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.411 15:04:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:40.411 15:04:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.411 15:04:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:40.411 15:04:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.411 15:04:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:41.345 15:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.345 15:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99101 00:24:41.345 15:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 99101 ']' 00:24:41.345 15:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 99101 00:24:41.345 15:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:24:41.345 15:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:41.345 15:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99101 00:24:41.345 killing process with pid 99101 00:24:41.345 15:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:41.345 15:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:41.345 15:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99101' 00:24:41.345 15:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 99101 00:24:41.345 15:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 99101 00:24:41.603 ************************************ 00:24:41.604 END TEST spdk_target_abort 00:24:41.604 ************************************ 00:24:41.604 00:24:41.604 real 0m11.103s 00:24:41.604 user 0m45.699s 00:24:41.604 sys 0m1.816s 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:41.604 15:04:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:24:41.604 15:04:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:41.604 15:04:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:41.604 15:04:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:41.604 15:04:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:41.604 
************************************ 00:24:41.604 START TEST kernel_target_abort 00:24:41.604 ************************************ 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:41.604 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:41.862 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:41.862 Waiting for block devices as requested 00:24:41.862 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:42.120 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:42.120 No valid GPT data, bailing 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:24:42.120 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:42.378 No valid GPT data, bailing 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:42.378 No valid GPT data, bailing 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:42.378 No valid GPT data, bailing 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:42.378 15:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c --hostid=de52f4f4-a532-4973-82be-b4690c1d5f3c -a 10.0.0.1 -t tcp -s 4420 00:24:42.378 00:24:42.378 Discovery Log Number of Records 2, Generation counter 2 00:24:42.378 =====Discovery Log Entry 0====== 00:24:42.378 trtype: tcp 00:24:42.378 adrfam: ipv4 00:24:42.378 subtype: current discovery subsystem 00:24:42.378 treq: not specified, sq flow control disable supported 00:24:42.378 portid: 1 00:24:42.378 trsvcid: 4420 00:24:42.378 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:42.378 traddr: 10.0.0.1 00:24:42.378 eflags: none 00:24:42.378 sectype: none 00:24:42.378 =====Discovery Log Entry 1====== 00:24:42.378 trtype: tcp 00:24:42.378 adrfam: ipv4 00:24:42.378 subtype: nvme subsystem 00:24:42.378 treq: not specified, sq flow control disable supported 00:24:42.378 portid: 1 00:24:42.378 trsvcid: 4420 00:24:42.378 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:42.378 traddr: 10.0.0.1 00:24:42.378 eflags: none 00:24:42.378 sectype: none 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:42.378 15:04:21 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:42.378 15:04:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:45.661 Initializing NVMe Controllers 00:24:45.661 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:45.661 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:45.661 Initialization complete. Launching workers. 00:24:45.661 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34477, failed: 0 00:24:45.661 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34477, failed to submit 0 00:24:45.661 success 0, unsuccess 34477, failed 0 00:24:45.661 15:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:45.661 15:04:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:48.945 Initializing NVMe Controllers 00:24:48.945 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:48.945 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:48.945 Initialization complete. Launching workers. 
00:24:48.945 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65326, failed: 0 00:24:48.945 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27615, failed to submit 37711 00:24:48.945 success 0, unsuccess 27615, failed 0 00:24:48.945 15:04:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:48.945 15:04:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:52.228 Initializing NVMe Controllers 00:24:52.228 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:52.228 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:52.228 Initialization complete. Launching workers. 00:24:52.228 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77597, failed: 0 00:24:52.228 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19384, failed to submit 58213 00:24:52.228 success 0, unsuccess 19384, failed 0 00:24:52.228 15:04:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:52.228 15:04:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:52.228 15:04:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:24:52.228 15:04:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:52.228 15:04:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:52.228 15:04:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:52.228 15:04:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:52.228 15:04:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:52.228 15:04:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:52.228 15:04:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:52.793 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:54.688 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:54.688 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:54.688 00:24:54.688 real 0m13.028s 00:24:54.688 user 0m6.292s 00:24:54.688 sys 0m4.091s 00:24:54.688 15:04:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:54.688 ************************************ 00:24:54.688 END TEST kernel_target_abort 00:24:54.688 ************************************ 00:24:54.688 15:04:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:54.688 
15:04:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:54.688 rmmod nvme_tcp 00:24:54.688 rmmod nvme_fabrics 00:24:54.688 rmmod nvme_keyring 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 99101 ']' 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 99101 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 99101 ']' 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 99101 00:24:54.688 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (99101) - No such process 00:24:54.688 Process with pid 99101 is not found 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 99101 is not found' 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:54.688 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:54.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:54.945 Waiting for block devices as requested 00:24:54.945 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:55.203 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:55.203 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:55.203 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:55.203 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:55.203 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:55.203 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.203 15:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:55.203 15:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.203 15:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:55.203 00:24:55.203 real 0m27.370s 00:24:55.203 user 0m53.231s 00:24:55.203 sys 0m7.132s 00:24:55.203 15:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:55.203 15:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:55.203 ************************************ 00:24:55.203 END TEST nvmf_abort_qd_sizes 00:24:55.203 ************************************ 00:24:55.203 15:04:33 -- common/autotest_common.sh@1142 -- # return 0 00:24:55.203 15:04:33 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:55.203 15:04:33 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:24:55.203 15:04:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.203 15:04:33 -- common/autotest_common.sh@10 -- # set +x 00:24:55.203 ************************************ 00:24:55.203 START TEST keyring_file 00:24:55.203 ************************************ 00:24:55.203 15:04:33 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:55.461 * Looking for test storage... 00:24:55.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:55.461 15:04:33 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:55.461 15:04:33 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:55.461 15:04:33 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.461 15:04:33 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.461 15:04:33 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.461 15:04:33 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.461 15:04:33 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.461 15:04:33 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.461 15:04:33 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:55.461 15:04:33 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@47 -- # : 0 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:55.461 15:04:33 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:55.461 15:04:33 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:55.461 15:04:33 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:55.461 15:04:33 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:55.461 15:04:33 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:55.461 15:04:33 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:55.461 15:04:33 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:55.461 15:04:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:55.461 15:04:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:55.461 15:04:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:55.461 15:04:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:55.461 15:04:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:55.461 15:04:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DL5yk5ySAA 00:24:55.461 15:04:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:55.461 15:04:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:55.461 15:04:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DL5yk5ySAA 00:24:55.461 15:04:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DL5yk5ySAA 00:24:55.461 15:04:33 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DL5yk5ySAA 00:24:55.461 15:04:33 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:55.461 15:04:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:55.461 15:04:33 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:55.461 15:04:33 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:55.461 15:04:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:55.461 15:04:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:55.461 15:04:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4mHzmmfZFZ 00:24:55.461 15:04:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:55.461 15:04:34 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:55.461 15:04:34 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:55.461 15:04:34 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:55.461 15:04:34 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:55.461 15:04:34 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:55.461 15:04:34 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:55.461 15:04:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4mHzmmfZFZ 00:24:55.461 15:04:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4mHzmmfZFZ 00:24:55.461 15:04:34 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.4mHzmmfZFZ 00:24:55.461 15:04:34 keyring_file -- keyring/file.sh@30 -- # tgtpid=99976 00:24:55.461 15:04:34 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:55.461 15:04:34 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99976 00:24:55.461 15:04:34 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99976 ']' 00:24:55.461 15:04:34 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.461 15:04:34 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:55.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.461 15:04:34 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.461 15:04:34 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:55.461 15:04:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:55.719 [2024-07-12 15:04:34.144721] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
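The keyring_file test above materializes two TLS PSKs (key0 and key1) as mode-0600 temp files in the NVMe/TCP interchange format before starting spdk_tgt and the bdevperf instance. The encoding done by format_interchange_psk/format_key is piped through python and never echoed to the log, so the snippet below is only a sketch of what such a helper can do: it follows the published interchange layout (base64 of the raw key bytes plus their CRC-32 behind an NVMeTLSkey-1 prefix), and the exact field layout and byte order are assumptions.

# Illustrative stand-in for prep_key: write one interchange-format PSK file.
key_hex=00112233445566778899aabbccddeeff
path=$(mktemp)                       # e.g. /tmp/tmp.DL5yk5ySAA in this run
python3 - "$key_hex" > "$path" <<'EOF'
import base64, binascii, struct, sys
raw = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", binascii.crc32(raw) & 0xffffffff)             # CRC-32 of the key bytes; byte order assumed
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(raw + crc).decode())   # "00" stands for digest argument 0 (assumed field)
EOF
chmod 0600 "$path"                   # keyring_file_add_key rejects looser permissions (see the 0660 case later in the run)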
00:24:55.719 [2024-07-12 15:04:34.144861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99976 ] 00:24:55.719 [2024-07-12 15:04:34.291351] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.719 [2024-07-12 15:04:34.362590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:56.654 15:04:35 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:56.654 [2024-07-12 15:04:35.054673] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.654 null0 00:24:56.654 [2024-07-12 15:04:35.086611] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:56.654 [2024-07-12 15:04:35.086883] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:56.654 [2024-07-12 15:04:35.094609] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.654 15:04:35 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:56.654 [2024-07-12 15:04:35.106607] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:56.654 request: 00:24:56.654 { 00:24:56.654 2024/07/12 15:04:35 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:24:56.654 "method": "nvmf_subsystem_add_listener", 00:24:56.654 "params": { 00:24:56.654 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:56.654 "secure_channel": false, 00:24:56.654 "listen_address": { 00:24:56.654 "trtype": "tcp", 00:24:56.654 "traddr": "127.0.0.1", 00:24:56.654 "trsvcid": "4420" 00:24:56.654 } 00:24:56.654 } 00:24:56.654 } 00:24:56.654 Got JSON-RPC error 
response 00:24:56.654 GoRPCClient: error on JSON-RPC call 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:56.654 15:04:35 keyring_file -- keyring/file.sh@46 -- # bperfpid=100011 00:24:56.654 15:04:35 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:56.654 15:04:35 keyring_file -- keyring/file.sh@48 -- # waitforlisten 100011 /var/tmp/bperf.sock 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100011 ']' 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:56.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.654 15:04:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:56.654 [2024-07-12 15:04:35.171370] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:24:56.654 [2024-07-12 15:04:35.171705] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100011 ] 00:24:56.913 [2024-07-12 15:04:35.310386] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.913 [2024-07-12 15:04:35.369890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.480 15:04:36 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.480 15:04:36 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:57.480 15:04:36 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DL5yk5ySAA 00:24:57.480 15:04:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DL5yk5ySAA 00:24:58.046 15:04:36 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.4mHzmmfZFZ 00:24:58.046 15:04:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.4mHzmmfZFZ 00:24:58.304 15:04:36 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:24:58.304 15:04:36 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:24:58.304 15:04:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.304 15:04:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:58.304 15:04:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.562 15:04:37 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.DL5yk5ySAA == 
\/\t\m\p\/\t\m\p\.\D\L\5\y\k\5\y\S\A\A ]] 00:24:58.562 15:04:37 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:24:58.562 15:04:37 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:58.562 15:04:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.562 15:04:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.562 15:04:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:59.129 15:04:37 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.4mHzmmfZFZ == \/\t\m\p\/\t\m\p\.\4\m\H\z\m\m\f\Z\F\Z ]] 00:24:59.129 15:04:37 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:24:59.129 15:04:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:59.129 15:04:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:59.129 15:04:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:59.129 15:04:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.129 15:04:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:59.387 15:04:37 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:24:59.387 15:04:37 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:24:59.387 15:04:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:59.387 15:04:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:59.387 15:04:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:59.387 15:04:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.387 15:04:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:59.646 15:04:38 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:59.646 15:04:38 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:59.646 15:04:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:59.904 [2024-07-12 15:04:38.407982] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:59.904 nvme0n1 00:24:59.904 15:04:38 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:24:59.904 15:04:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:59.904 15:04:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:59.904 15:04:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:59.904 15:04:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.904 15:04:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:00.163 15:04:38 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:25:00.163 15:04:38 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:25:00.163 15:04:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:00.163 15:04:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:00.163 15:04:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:25:00.163 15:04:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:00.163 15:04:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.729 15:04:39 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:25:00.729 15:04:39 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:00.729 Running I/O for 1 seconds... 00:25:01.767 00:25:01.767 Latency(us) 00:25:01.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.767 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:01.767 nvme0n1 : 1.01 11091.56 43.33 0.00 0.00 11501.97 3530.01 16801.05 00:25:01.767 =================================================================================================================== 00:25:01.767 Total : 11091.56 43.33 0.00 0.00 11501.97 3530.01 16801.05 00:25:01.767 0 00:25:01.767 15:04:40 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:01.767 15:04:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:02.025 15:04:40 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:25:02.025 15:04:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:02.025 15:04:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:02.025 15:04:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:02.025 15:04:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:02.025 15:04:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:02.284 15:04:40 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:25:02.284 15:04:40 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:25:02.284 15:04:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:02.284 15:04:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:02.542 15:04:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:02.542 15:04:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:02.542 15:04:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:02.800 15:04:41 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:02.800 15:04:41 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:02.800 15:04:41 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:02.800 15:04:41 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:02.800 15:04:41 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:02.800 15:04:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:02.800 15:04:41 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:02.800 15:04:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
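The positive-path checks above reduce to a small rpc.py/jq pattern: read the keyring over the bdevperf RPC socket, pick out one key, and compare its refcnt before and after the controller attaches. Restated as a standalone sketch (the get_key/get_refcnt helpers in test/keyring/common.sh wrap these calls):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
refcnt=$("$rpc" -s /var/tmp/bperf.sock keyring_get_keys \
    | jq -r '.[] | select(.name == "key0") | .refcnt')
echo "$refcnt"    # 1 right after keyring_file_add_key, 2 while nvme0 holds the key
# I/O is then driven for one second through the already-running bdevperf instance:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests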
00:25:02.800 15:04:41 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:02.800 15:04:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:03.059 [2024-07-12 15:04:41.514180] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:03.059 [2024-07-12 15:04:41.514822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f60e0 (107): Transport endpoint is not connected 00:25:03.059 [2024-07-12 15:04:41.515807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f60e0 (9): Bad file descriptor 00:25:03.059 [2024-07-12 15:04:41.516803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:03.059 [2024-07-12 15:04:41.516828] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:03.059 [2024-07-12 15:04:41.516839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:03.059 2024/07/12 15:04:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:25:03.059 request: 00:25:03.059 { 00:25:03.059 "method": "bdev_nvme_attach_controller", 00:25:03.059 "params": { 00:25:03.059 "name": "nvme0", 00:25:03.059 "trtype": "tcp", 00:25:03.059 "traddr": "127.0.0.1", 00:25:03.059 "adrfam": "ipv4", 00:25:03.059 "trsvcid": "4420", 00:25:03.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:03.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:03.059 "prchk_reftag": false, 00:25:03.059 "prchk_guard": false, 00:25:03.059 "hdgst": false, 00:25:03.059 "ddgst": false, 00:25:03.059 "psk": "key1" 00:25:03.059 } 00:25:03.059 } 00:25:03.059 Got JSON-RPC error response 00:25:03.059 GoRPCClient: error on JSON-RPC call 00:25:03.059 15:04:41 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:03.059 15:04:41 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:03.059 15:04:41 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:03.059 15:04:41 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:03.059 15:04:41 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:25:03.059 15:04:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:03.059 15:04:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.059 15:04:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:03.059 15:04:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.059 15:04:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.318 15:04:41 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:25:03.318 
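The failing attach above is the wrong-key case: the host offers key1, which the target side was not configured with, so the connection is dropped (errno 107, Transport endpoint is not connected) and the RPC comes back with Code=-5. The test asserts the failure through the NOT helper from test/common/autotest_common.sh; a simplified equivalent of that assertion, with the helper replaced by a plain if, would be:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
if "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo "attach with the wrong PSK unexpectedly succeeded" >&2
    exit 1
fi
# key0 must still be intact and unreferenced by any controller afterwards:
"$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'   # => 1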
15:04:41 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:25:03.318 15:04:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:03.318 15:04:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.318 15:04:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.318 15:04:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:03.318 15:04:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.576 15:04:42 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:03.576 15:04:42 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:25:03.576 15:04:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:03.834 15:04:42 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:25:03.834 15:04:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:04.092 15:04:42 keyring_file -- keyring/file.sh@77 -- # jq length 00:25:04.092 15:04:42 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:25:04.092 15:04:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:04.350 15:04:43 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:25:04.350 15:04:43 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.DL5yk5ySAA 00:25:04.608 15:04:43 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DL5yk5ySAA 00:25:04.608 15:04:43 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:04.608 15:04:43 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DL5yk5ySAA 00:25:04.608 15:04:43 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:04.608 15:04:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:04.608 15:04:43 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:04.608 15:04:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:04.608 15:04:43 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DL5yk5ySAA 00:25:04.608 15:04:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DL5yk5ySAA 00:25:04.867 [2024-07-12 15:04:43.279082] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DL5yk5ySAA': 0100660 00:25:04.867 [2024-07-12 15:04:43.279145] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:04.867 2024/07/12 15:04:43 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.DL5yk5ySAA], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:25:04.867 request: 00:25:04.867 { 00:25:04.867 "method": "keyring_file_add_key", 00:25:04.867 "params": { 00:25:04.867 "name": "key0", 00:25:04.867 "path": "/tmp/tmp.DL5yk5ySAA" 00:25:04.867 } 00:25:04.867 } 00:25:04.867 Got JSON-RPC error response 00:25:04.867 GoRPCClient: error on JSON-RPC call 00:25:04.867 15:04:43 keyring_file -- common/autotest_common.sh@651 -- # 
es=1 00:25:04.867 15:04:43 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:04.867 15:04:43 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:04.867 15:04:43 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:04.867 15:04:43 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.DL5yk5ySAA 00:25:04.867 15:04:43 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DL5yk5ySAA 00:25:04.867 15:04:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DL5yk5ySAA 00:25:05.126 15:04:43 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.DL5yk5ySAA 00:25:05.126 15:04:43 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:25:05.126 15:04:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:05.126 15:04:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:05.126 15:04:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:05.126 15:04:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:05.126 15:04:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:05.385 15:04:43 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:25:05.385 15:04:43 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.385 15:04:43 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:05.385 15:04:43 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.385 15:04:43 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:05.385 15:04:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.385 15:04:43 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:05.385 15:04:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.385 15:04:43 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.385 15:04:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.643 [2024-07-12 15:04:44.279293] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DL5yk5ySAA': No such file or directory 00:25:05.643 [2024-07-12 15:04:44.279354] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:05.643 [2024-07-12 15:04:44.279380] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:05.643 [2024-07-12 15:04:44.279390] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:05.643 [2024-07-12 15:04:44.279399] bdev_nvme.c:6276:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:05.643 2024/07/12 
15:04:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:25:05.643 request: 00:25:05.643 { 00:25:05.643 "method": "bdev_nvme_attach_controller", 00:25:05.643 "params": { 00:25:05.643 "name": "nvme0", 00:25:05.643 "trtype": "tcp", 00:25:05.643 "traddr": "127.0.0.1", 00:25:05.643 "adrfam": "ipv4", 00:25:05.643 "trsvcid": "4420", 00:25:05.643 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:05.643 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:05.643 "prchk_reftag": false, 00:25:05.643 "prchk_guard": false, 00:25:05.643 "hdgst": false, 00:25:05.643 "ddgst": false, 00:25:05.643 "psk": "key0" 00:25:05.643 } 00:25:05.643 } 00:25:05.643 Got JSON-RPC error response 00:25:05.643 GoRPCClient: error on JSON-RPC call 00:25:05.901 15:04:44 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:05.901 15:04:44 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:05.901 15:04:44 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:05.901 15:04:44 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:05.901 15:04:44 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:25:05.901 15:04:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:06.159 15:04:44 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:06.159 15:04:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:06.159 15:04:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:06.159 15:04:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:06.159 15:04:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:06.159 15:04:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:06.159 15:04:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5FxjYpYigd 00:25:06.159 15:04:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:06.159 15:04:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:06.159 15:04:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:25:06.159 15:04:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:06.159 15:04:44 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:06.159 15:04:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:25:06.159 15:04:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:25:06.159 15:04:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5FxjYpYigd 00:25:06.159 15:04:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5FxjYpYigd 00:25:06.159 15:04:44 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.5FxjYpYigd 00:25:06.159 15:04:44 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5FxjYpYigd 00:25:06.159 15:04:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5FxjYpYigd 00:25:06.419 15:04:44 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:06.419 15:04:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:06.985 nvme0n1 00:25:06.985 15:04:45 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:25:06.985 15:04:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:06.985 15:04:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.985 15:04:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.985 15:04:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.985 15:04:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:07.243 15:04:45 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:25:07.243 15:04:45 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:25:07.243 15:04:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:07.809 15:04:46 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:25:07.809 15:04:46 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:25:07.809 15:04:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:07.809 15:04:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:07.809 15:04:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:08.067 15:04:46 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:25:08.067 15:04:46 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:25:08.067 15:04:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:08.067 15:04:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:08.067 15:04:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:08.067 15:04:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:08.067 15:04:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:08.325 15:04:46 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:25:08.325 15:04:46 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:08.325 15:04:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:08.583 15:04:47 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:25:08.583 15:04:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:08.583 15:04:47 keyring_file -- keyring/file.sh@104 -- # jq length 00:25:08.842 15:04:47 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:25:08.842 15:04:47 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5FxjYpYigd 00:25:08.842 15:04:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5FxjYpYigd 
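The removal-while-in-use sequence above demonstrates the keyring semantics the test is after: keyring_file_remove_key on a key that an attached controller still uses only flags it as removed and drops the keyring's own reference, and the entry disappears entirely once the controller detaches. The same checks, condensed into the underlying rpc.py/jq calls:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bperf.sock keyring_file_remove_key key0
"$rpc" -s /var/tmp/bperf.sock keyring_get_keys \
    | jq '.[] | select(.name == "key0") | {removed, refcnt}'   # => removed true, refcnt 1 while nvme0 is attached
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
"$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq length     # => 0 once nothing references the key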
00:25:09.101 15:04:47 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.4mHzmmfZFZ 00:25:09.101 15:04:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.4mHzmmfZFZ 00:25:09.358 15:04:47 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:09.358 15:04:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:09.617 nvme0n1 00:25:09.617 15:04:48 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:25:09.617 15:04:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:10.185 15:04:48 keyring_file -- keyring/file.sh@112 -- # config='{ 00:25:10.185 "subsystems": [ 00:25:10.185 { 00:25:10.185 "subsystem": "keyring", 00:25:10.185 "config": [ 00:25:10.185 { 00:25:10.185 "method": "keyring_file_add_key", 00:25:10.185 "params": { 00:25:10.185 "name": "key0", 00:25:10.185 "path": "/tmp/tmp.5FxjYpYigd" 00:25:10.185 } 00:25:10.185 }, 00:25:10.185 { 00:25:10.185 "method": "keyring_file_add_key", 00:25:10.185 "params": { 00:25:10.185 "name": "key1", 00:25:10.185 "path": "/tmp/tmp.4mHzmmfZFZ" 00:25:10.185 } 00:25:10.185 } 00:25:10.185 ] 00:25:10.185 }, 00:25:10.185 { 00:25:10.185 "subsystem": "iobuf", 00:25:10.185 "config": [ 00:25:10.185 { 00:25:10.185 "method": "iobuf_set_options", 00:25:10.185 "params": { 00:25:10.185 "large_bufsize": 135168, 00:25:10.185 "large_pool_count": 1024, 00:25:10.185 "small_bufsize": 8192, 00:25:10.185 "small_pool_count": 8192 00:25:10.185 } 00:25:10.185 } 00:25:10.185 ] 00:25:10.185 }, 00:25:10.185 { 00:25:10.185 "subsystem": "sock", 00:25:10.185 "config": [ 00:25:10.185 { 00:25:10.185 "method": "sock_set_default_impl", 00:25:10.185 "params": { 00:25:10.185 "impl_name": "posix" 00:25:10.185 } 00:25:10.185 }, 00:25:10.185 { 00:25:10.185 "method": "sock_impl_set_options", 00:25:10.185 "params": { 00:25:10.185 "enable_ktls": false, 00:25:10.185 "enable_placement_id": 0, 00:25:10.185 "enable_quickack": false, 00:25:10.185 "enable_recv_pipe": true, 00:25:10.185 "enable_zerocopy_send_client": false, 00:25:10.185 "enable_zerocopy_send_server": true, 00:25:10.185 "impl_name": "ssl", 00:25:10.185 "recv_buf_size": 4096, 00:25:10.185 "send_buf_size": 4096, 00:25:10.185 "tls_version": 0, 00:25:10.185 "zerocopy_threshold": 0 00:25:10.185 } 00:25:10.185 }, 00:25:10.185 { 00:25:10.185 "method": "sock_impl_set_options", 00:25:10.185 "params": { 00:25:10.185 "enable_ktls": false, 00:25:10.185 "enable_placement_id": 0, 00:25:10.185 "enable_quickack": false, 00:25:10.185 "enable_recv_pipe": true, 00:25:10.185 "enable_zerocopy_send_client": false, 00:25:10.185 "enable_zerocopy_send_server": true, 00:25:10.185 "impl_name": "posix", 00:25:10.185 "recv_buf_size": 2097152, 00:25:10.185 "send_buf_size": 2097152, 00:25:10.185 "tls_version": 0, 00:25:10.185 "zerocopy_threshold": 0 00:25:10.185 } 00:25:10.185 } 00:25:10.185 ] 00:25:10.185 }, 00:25:10.185 { 00:25:10.185 "subsystem": "vmd", 00:25:10.185 "config": [] 00:25:10.186 }, 00:25:10.186 { 00:25:10.186 "subsystem": "accel", 00:25:10.186 "config": [ 00:25:10.186 { 00:25:10.186 "method": 
"accel_set_options", 00:25:10.186 "params": { 00:25:10.186 "buf_count": 2048, 00:25:10.186 "large_cache_size": 16, 00:25:10.186 "sequence_count": 2048, 00:25:10.186 "small_cache_size": 128, 00:25:10.186 "task_count": 2048 00:25:10.186 } 00:25:10.186 } 00:25:10.186 ] 00:25:10.186 }, 00:25:10.186 { 00:25:10.186 "subsystem": "bdev", 00:25:10.186 "config": [ 00:25:10.186 { 00:25:10.186 "method": "bdev_set_options", 00:25:10.186 "params": { 00:25:10.186 "bdev_auto_examine": true, 00:25:10.186 "bdev_io_cache_size": 256, 00:25:10.186 "bdev_io_pool_size": 65535, 00:25:10.186 "iobuf_large_cache_size": 16, 00:25:10.186 "iobuf_small_cache_size": 128 00:25:10.186 } 00:25:10.186 }, 00:25:10.186 { 00:25:10.186 "method": "bdev_raid_set_options", 00:25:10.186 "params": { 00:25:10.186 "process_window_size_kb": 1024 00:25:10.186 } 00:25:10.186 }, 00:25:10.186 { 00:25:10.186 "method": "bdev_iscsi_set_options", 00:25:10.186 "params": { 00:25:10.186 "timeout_sec": 30 00:25:10.186 } 00:25:10.186 }, 00:25:10.186 { 00:25:10.186 "method": "bdev_nvme_set_options", 00:25:10.186 "params": { 00:25:10.186 "action_on_timeout": "none", 00:25:10.186 "allow_accel_sequence": false, 00:25:10.186 "arbitration_burst": 0, 00:25:10.186 "bdev_retry_count": 3, 00:25:10.186 "ctrlr_loss_timeout_sec": 0, 00:25:10.186 "delay_cmd_submit": true, 00:25:10.186 "dhchap_dhgroups": [ 00:25:10.186 "null", 00:25:10.186 "ffdhe2048", 00:25:10.186 "ffdhe3072", 00:25:10.186 "ffdhe4096", 00:25:10.186 "ffdhe6144", 00:25:10.186 "ffdhe8192" 00:25:10.186 ], 00:25:10.186 "dhchap_digests": [ 00:25:10.186 "sha256", 00:25:10.186 "sha384", 00:25:10.186 "sha512" 00:25:10.186 ], 00:25:10.186 "disable_auto_failback": false, 00:25:10.186 "fast_io_fail_timeout_sec": 0, 00:25:10.186 "generate_uuids": false, 00:25:10.186 "high_priority_weight": 0, 00:25:10.186 "io_path_stat": false, 00:25:10.186 "io_queue_requests": 512, 00:25:10.186 "keep_alive_timeout_ms": 10000, 00:25:10.186 "low_priority_weight": 0, 00:25:10.186 "medium_priority_weight": 0, 00:25:10.186 "nvme_adminq_poll_period_us": 10000, 00:25:10.186 "nvme_error_stat": false, 00:25:10.186 "nvme_ioq_poll_period_us": 0, 00:25:10.186 "rdma_cm_event_timeout_ms": 0, 00:25:10.186 "rdma_max_cq_size": 0, 00:25:10.186 "rdma_srq_size": 0, 00:25:10.186 "reconnect_delay_sec": 0, 00:25:10.186 "timeout_admin_us": 0, 00:25:10.186 "timeout_us": 0, 00:25:10.186 "transport_ack_timeout": 0, 00:25:10.186 "transport_retry_count": 4, 00:25:10.186 "transport_tos": 0 00:25:10.186 } 00:25:10.186 }, 00:25:10.186 { 00:25:10.186 "method": "bdev_nvme_attach_controller", 00:25:10.186 "params": { 00:25:10.186 "adrfam": "IPv4", 00:25:10.186 "ctrlr_loss_timeout_sec": 0, 00:25:10.186 "ddgst": false, 00:25:10.186 "fast_io_fail_timeout_sec": 0, 00:25:10.186 "hdgst": false, 00:25:10.186 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:10.186 "name": "nvme0", 00:25:10.186 "prchk_guard": false, 00:25:10.186 "prchk_reftag": false, 00:25:10.186 "psk": "key0", 00:25:10.186 "reconnect_delay_sec": 0, 00:25:10.186 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:10.186 "traddr": "127.0.0.1", 00:25:10.186 "trsvcid": "4420", 00:25:10.186 "trtype": "TCP" 00:25:10.186 } 00:25:10.186 }, 00:25:10.186 { 00:25:10.186 "method": "bdev_nvme_set_hotplug", 00:25:10.186 "params": { 00:25:10.186 "enable": false, 00:25:10.186 "period_us": 100000 00:25:10.186 } 00:25:10.186 }, 00:25:10.186 { 00:25:10.186 "method": "bdev_wait_for_examine" 00:25:10.186 } 00:25:10.186 ] 00:25:10.186 }, 00:25:10.186 { 00:25:10.186 "subsystem": "nbd", 00:25:10.186 "config": [] 00:25:10.186 } 
00:25:10.186 ] 00:25:10.186 }' 00:25:10.186 15:04:48 keyring_file -- keyring/file.sh@114 -- # killprocess 100011 00:25:10.186 15:04:48 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100011 ']' 00:25:10.186 15:04:48 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100011 00:25:10.186 15:04:48 keyring_file -- common/autotest_common.sh@953 -- # uname 00:25:10.186 15:04:48 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:10.186 15:04:48 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100011 00:25:10.186 killing process with pid 100011 00:25:10.186 Received shutdown signal, test time was about 1.000000 seconds 00:25:10.186 00:25:10.186 Latency(us) 00:25:10.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.186 =================================================================================================================== 00:25:10.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:10.186 15:04:48 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:10.186 15:04:48 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:10.186 15:04:48 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100011' 00:25:10.186 15:04:48 keyring_file -- common/autotest_common.sh@967 -- # kill 100011 00:25:10.186 15:04:48 keyring_file -- common/autotest_common.sh@972 -- # wait 100011 00:25:10.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:10.446 15:04:48 keyring_file -- keyring/file.sh@117 -- # bperfpid=100494 00:25:10.446 15:04:48 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100494 /var/tmp/bperf.sock 00:25:10.446 15:04:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100494 ']' 00:25:10.446 15:04:48 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:10.446 15:04:48 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:25:10.446 "subsystems": [ 00:25:10.446 { 00:25:10.446 "subsystem": "keyring", 00:25:10.446 "config": [ 00:25:10.446 { 00:25:10.446 "method": "keyring_file_add_key", 00:25:10.446 "params": { 00:25:10.446 "name": "key0", 00:25:10.446 "path": "/tmp/tmp.5FxjYpYigd" 00:25:10.446 } 00:25:10.446 }, 00:25:10.446 { 00:25:10.446 "method": "keyring_file_add_key", 00:25:10.446 "params": { 00:25:10.446 "name": "key1", 00:25:10.446 "path": "/tmp/tmp.4mHzmmfZFZ" 00:25:10.446 } 00:25:10.446 } 00:25:10.446 ] 00:25:10.446 }, 00:25:10.446 { 00:25:10.446 "subsystem": "iobuf", 00:25:10.446 "config": [ 00:25:10.446 { 00:25:10.446 "method": "iobuf_set_options", 00:25:10.446 "params": { 00:25:10.446 "large_bufsize": 135168, 00:25:10.446 "large_pool_count": 1024, 00:25:10.446 "small_bufsize": 8192, 00:25:10.446 "small_pool_count": 8192 00:25:10.446 } 00:25:10.446 } 00:25:10.446 ] 00:25:10.446 }, 00:25:10.446 { 00:25:10.446 "subsystem": "sock", 00:25:10.446 "config": [ 00:25:10.446 { 00:25:10.446 "method": "sock_set_default_impl", 00:25:10.446 "params": { 00:25:10.446 "impl_name": "posix" 00:25:10.446 } 00:25:10.446 }, 00:25:10.446 { 00:25:10.446 "method": "sock_impl_set_options", 00:25:10.446 "params": { 00:25:10.446 "enable_ktls": false, 00:25:10.446 "enable_placement_id": 0, 00:25:10.446 "enable_quickack": false, 00:25:10.446 "enable_recv_pipe": true, 00:25:10.446 "enable_zerocopy_send_client": false, 00:25:10.446 
"enable_zerocopy_send_server": true, 00:25:10.446 "impl_name": "ssl", 00:25:10.446 "recv_buf_size": 4096, 00:25:10.446 "send_buf_size": 4096, 00:25:10.446 "tls_version": 0, 00:25:10.446 "zerocopy_threshold": 0 00:25:10.446 } 00:25:10.446 }, 00:25:10.446 { 00:25:10.446 "method": "sock_impl_set_options", 00:25:10.446 "params": { 00:25:10.446 "enable_ktls": false, 00:25:10.446 "enable_placement_id": 0, 00:25:10.446 "enable_quickack": false, 00:25:10.446 "enable_recv_pipe": true, 00:25:10.446 "enable_zerocopy_send_client": false, 00:25:10.446 "enable_zerocopy_send_server": true, 00:25:10.446 "impl_name": "posix", 00:25:10.446 "recv_buf_size": 2097152, 00:25:10.446 "send_buf_size": 2097152, 00:25:10.446 "tls_version": 0, 00:25:10.446 "zerocopy_threshold": 0 00:25:10.446 } 00:25:10.446 } 00:25:10.446 ] 00:25:10.446 }, 00:25:10.446 { 00:25:10.446 "subsystem": "vmd", 00:25:10.446 "config": [] 00:25:10.446 }, 00:25:10.446 { 00:25:10.446 "subsystem": "accel", 00:25:10.446 "config": [ 00:25:10.446 { 00:25:10.446 "method": "accel_set_options", 00:25:10.446 "params": { 00:25:10.446 "buf_count": 2048, 00:25:10.446 "large_cache_size": 16, 00:25:10.446 "sequence_count": 2048, 00:25:10.446 "small_cache_size": 128, 00:25:10.446 "task_count": 2048 00:25:10.446 } 00:25:10.446 } 00:25:10.446 ] 00:25:10.446 }, 00:25:10.446 { 00:25:10.446 "subsystem": "bdev", 00:25:10.446 "config": [ 00:25:10.446 { 00:25:10.446 "method": "bdev_set_options", 00:25:10.446 "params": { 00:25:10.446 "bdev_auto_examine": true, 00:25:10.446 "bdev_io_cache_size": 256, 00:25:10.446 "bdev_io_pool_size": 65535, 00:25:10.446 "iobuf_large_cache_size": 16, 00:25:10.446 "iobuf_small_cache_size": 128 00:25:10.446 } 00:25:10.446 }, 00:25:10.446 { 00:25:10.446 "method": "bdev_raid_set_options", 00:25:10.446 "params": { 00:25:10.446 "process_window_size_kb": 1024 00:25:10.446 } 00:25:10.446 }, 00:25:10.446 { 00:25:10.446 "method": "bdev_iscsi_set_options", 00:25:10.446 "params": { 00:25:10.446 "timeout_sec": 30 00:25:10.446 } 00:25:10.446 }, 00:25:10.446 { 00:25:10.446 "method": "bdev_nvme_set_options", 00:25:10.446 "params": { 00:25:10.446 "action_on_timeout": "none", 00:25:10.446 "allow_accel_sequence": false, 00:25:10.446 "arbitration_burst": 0, 00:25:10.446 "bdev_retry_count": 3, 00:25:10.446 "ctrlr_loss_timeout_sec": 0, 00:25:10.446 "delay_cmd_submit": true, 00:25:10.446 "dhchap_dhgroups": [ 00:25:10.446 "null", 00:25:10.446 "ffdhe2048", 00:25:10.446 "ffdhe3072", 00:25:10.446 "ffdhe4096", 00:25:10.446 "ffdhe6144", 00:25:10.446 "ffdhe8192" 00:25:10.446 ], 00:25:10.446 "dhchap_digests": [ 00:25:10.446 "sha256", 00:25:10.446 "sha384", 00:25:10.446 "sha512" 00:25:10.446 ], 00:25:10.446 "disable_auto_failback": false, 00:25:10.446 "fast_io_fail_timeout_sec": 0, 00:25:10.446 "generate_uuids": false, 00:25:10.446 "high_priority_weight": 0, 00:25:10.446 "io_path_stat": false, 00:25:10.446 "io_queue_requests": 512, 00:25:10.446 "keep_alive_timeout_ms": 10000, 00:25:10.446 "low_priority_weight": 0, 00:25:10.446 "medium_priority_weight": 0, 00:25:10.446 "nvme_adminq_poll_period_us": 10000, 00:25:10.446 " 15:04:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:10.446 nvme_error_stat": false, 00:25:10.446 "nvme_ioq_poll_period_us": 0, 00:25:10.446 "rdma_cm_event_timeout_ms": 0, 00:25:10.446 "rdma_max_cq_size": 0, 00:25:10.446 "rdma_srq_size": 0, 00:25:10.446 "reconnect_delay_sec": 0, 00:25:10.446 "timeout_admin_us": 0, 00:25:10.446 "timeout_us": 0, 00:25:10.446 "transport_ack_timeout": 0, 00:25:10.446 
"transport_retry_count": 4, 00:25:10.446 "transport_tos": 0 00:25:10.446 } 00:25:10.446 }, 00:25:10.446 { 00:25:10.446 "method": "bdev_nvme_attach_controller", 00:25:10.446 "params": { 00:25:10.446 "adrfam": "IPv4", 00:25:10.446 "ctrlr_loss_timeout_sec": 0, 00:25:10.446 "ddgst": false, 00:25:10.446 "fast_io_fail_timeout_sec": 0, 00:25:10.446 "hdgst": false, 00:25:10.446 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:10.446 "name": "nvme0", 00:25:10.446 "prchk_guard": false, 00:25:10.446 "prchk_reftag": false, 00:25:10.446 "psk": "key0", 00:25:10.446 "reconnect_delay_sec": 0, 00:25:10.446 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:10.446 "traddr": "127.0.0.1", 00:25:10.446 "trsvcid": "4420", 00:25:10.446 "trtype": "TCP" 00:25:10.446 } 00:25:10.446 }, 00:25:10.446 { 00:25:10.446 "method": "bdev_nvme_set_hotplug", 00:25:10.446 "params": { 00:25:10.446 "enable": false, 00:25:10.446 "period_us": 100000 00:25:10.446 } 00:25:10.446 }, 00:25:10.446 { 00:25:10.446 "method": "bdev_wait_for_examine" 00:25:10.446 } 00:25:10.446 ] 00:25:10.446 }, 00:25:10.446 { 00:25:10.446 "subsystem": "nbd", 00:25:10.446 "config": [] 00:25:10.446 } 00:25:10.446 ] 00:25:10.446 }' 00:25:10.446 15:04:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.446 15:04:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:10.447 15:04:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.447 15:04:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:10.447 [2024-07-12 15:04:48.961804] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:25:10.447 [2024-07-12 15:04:48.962219] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100494 ] 00:25:10.705 [2024-07-12 15:04:49.105597] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.705 [2024-07-12 15:04:49.164413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.705 [2024-07-12 15:04:49.304784] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:11.639 15:04:50 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:11.639 15:04:50 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:25:11.639 15:04:50 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:25:11.639 15:04:50 keyring_file -- keyring/file.sh@120 -- # jq length 00:25:11.639 15:04:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:11.897 15:04:50 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:25:11.897 15:04:50 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:25:11.897 15:04:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:11.897 15:04:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:11.897 15:04:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:11.897 15:04:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:11.897 15:04:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:12.461 15:04:50 keyring_file -- 
keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:12.461 15:04:50 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:25:12.461 15:04:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:12.461 15:04:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:12.461 15:04:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:12.461 15:04:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:12.461 15:04:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:12.717 15:04:51 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:25:12.717 15:04:51 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:25:12.717 15:04:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:12.717 15:04:51 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:25:12.974 15:04:51 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:25:12.974 15:04:51 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:12.974 15:04:51 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.5FxjYpYigd /tmp/tmp.4mHzmmfZFZ 00:25:12.974 15:04:51 keyring_file -- keyring/file.sh@20 -- # killprocess 100494 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100494 ']' 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100494 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@953 -- # uname 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100494 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:12.974 killing process with pid 100494 00:25:12.974 Received shutdown signal, test time was about 1.000000 seconds 00:25:12.974 00:25:12.974 Latency(us) 00:25:12.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.974 =================================================================================================================== 00:25:12.974 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100494' 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@967 -- # kill 100494 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@972 -- # wait 100494 00:25:12.974 15:04:51 keyring_file -- keyring/file.sh@21 -- # killprocess 99976 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99976 ']' 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99976 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@953 -- # uname 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99976 00:25:12.974 killing process with pid 99976 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:12.974 15:04:51 keyring_file -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 99976' 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@967 -- # kill 99976 00:25:12.974 [2024-07-12 15:04:51.616846] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:12.974 15:04:51 keyring_file -- common/autotest_common.sh@972 -- # wait 99976 00:25:13.233 00:25:13.233 real 0m18.037s 00:25:13.233 user 0m46.476s 00:25:13.233 sys 0m3.167s 00:25:13.233 15:04:51 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:13.233 15:04:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:13.233 ************************************ 00:25:13.233 END TEST keyring_file 00:25:13.233 ************************************ 00:25:13.492 15:04:51 -- common/autotest_common.sh@1142 -- # return 0 00:25:13.492 15:04:51 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:25:13.492 15:04:51 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:13.492 15:04:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:13.492 15:04:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:13.492 15:04:51 -- common/autotest_common.sh@10 -- # set +x 00:25:13.492 ************************************ 00:25:13.492 START TEST keyring_linux 00:25:13.492 ************************************ 00:25:13.492 15:04:51 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:13.492 * Looking for test storage... 00:25:13.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:13.492 15:04:51 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:13.492 15:04:51 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:13.492 15:04:51 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:25:13.492 15:04:51 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.492 15:04:51 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.492 15:04:51 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.492 15:04:51 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.492 15:04:51 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.492 15:04:51 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.492 15:04:51 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.492 15:04:51 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.492 15:04:51 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.492 15:04:51 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de52f4f4-a532-4973-82be-b4690c1d5f3c 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=de52f4f4-a532-4973-82be-b4690c1d5f3c 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:13.492 15:04:52 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.492 15:04:52 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.492 15:04:52 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.492 15:04:52 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.492 15:04:52 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.492 15:04:52 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.492 15:04:52 keyring_linux -- paths/export.sh@5 -- # export PATH 00:25:13.492 15:04:52 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:13.492 15:04:52 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:13.492 15:04:52 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:13.492 15:04:52 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:25:13.492 15:04:52 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:25:13.492 15:04:52 keyring_linux -- 
keyring/linux.sh@45 -- # trap cleanup EXIT 00:25:13.492 15:04:52 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@705 -- # python - 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:25:13.492 /tmp/:spdk-test:key0 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:25:13.492 15:04:52 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:25:13.492 15:04:52 keyring_linux -- nvmf/common.sh@705 -- # python - 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:25:13.492 /tmp/:spdk-test:key1 00:25:13.492 15:04:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:25:13.492 15:04:52 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100648 00:25:13.492 15:04:52 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:13.492 15:04:52 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100648 00:25:13.492 15:04:52 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100648 ']' 00:25:13.492 15:04:52 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.492 15:04:52 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:13.492 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.492 15:04:52 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.492 15:04:52 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:13.492 15:04:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:13.750 [2024-07-12 15:04:52.169103] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 00:25:13.750 [2024-07-12 15:04:52.169200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100648 ] 00:25:13.750 [2024-07-12 15:04:52.305212] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.750 [2024-07-12 15:04:52.364165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.007 15:04:52 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:14.007 15:04:52 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:25:14.007 15:04:52 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:14.007 15:04:52 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.007 15:04:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:14.007 [2024-07-12 15:04:52.540434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.007 null0 00:25:14.007 [2024-07-12 15:04:52.572414] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:14.007 [2024-07-12 15:04:52.572696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:14.007 15:04:52 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.007 15:04:52 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:14.007 666137786 00:25:14.007 15:04:52 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:14.007 914489949 00:25:14.007 15:04:52 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100671 00:25:14.007 15:04:52 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100671 /var/tmp/bperf.sock 00:25:14.007 15:04:52 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:14.007 15:04:52 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100671 ']' 00:25:14.007 15:04:52 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:14.007 15:04:52 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:14.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:14.007 15:04:52 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:14.007 15:04:52 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:14.007 15:04:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:14.007 [2024-07-12 15:04:52.646868] Starting SPDK v24.09-pre git sha1 7d88ad9b8 / DPDK 24.03.0 initialization... 
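The two kernel keys provisioned above by keyring/linux.sh@66-67 can be reproduced by hand with keyutils; this is a minimal sketch using the exact payloads echoed by the test, assuming keyctl is installed — the serial numbers (666137786 / 914489949 in this run) are assigned by the kernel and will differ on another host.
  # register both test PSKs under the session keyring, as the trace above shows
  keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
  keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s
  # look a key up by name and dump its payload (what get_keysn/check_keys do further down)
  sn=$(keyctl search @s user :spdk-test:key0)
  keyctl print "$sn"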
00:25:14.007 [2024-07-12 15:04:52.646960] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100671 ] 00:25:14.265 [2024-07-12 15:04:52.779506] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.265 [2024-07-12 15:04:52.868932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.196 15:04:53 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:15.196 15:04:53 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:25:15.196 15:04:53 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:15.196 15:04:53 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:15.196 15:04:53 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:15.196 15:04:53 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:15.761 15:04:54 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:15.761 15:04:54 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:16.121 [2024-07-12 15:04:54.429432] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:16.121 nvme0n1 00:25:16.121 15:04:54 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:16.121 15:04:54 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:16.121 15:04:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:16.121 15:04:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:16.121 15:04:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:16.121 15:04:54 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:16.380 15:04:54 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:16.380 15:04:54 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:16.380 15:04:54 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:16.380 15:04:54 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:16.380 15:04:54 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:16.380 15:04:54 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:16.380 15:04:54 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:16.637 15:04:55 keyring_linux -- keyring/linux.sh@25 -- # sn=666137786 00:25:16.637 15:04:55 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:16.637 15:04:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:16.637 15:04:55 keyring_linux -- keyring/linux.sh@26 -- # [[ 666137786 == \6\6\6\1\3\7\7\8\6 ]] 00:25:16.637 15:04:55 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 666137786 00:25:16.637 15:04:55 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:16.637 15:04:55 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:16.637 Running I/O for 1 seconds... 00:25:18.006 00:25:18.006 Latency(us) 00:25:18.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.006 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:18.006 nvme0n1 : 1.01 11399.59 44.53 0.00 0.00 11157.38 9115.46 19184.17 00:25:18.006 =================================================================================================================== 00:25:18.006 Total : 11399.59 44.53 0.00 0.00 11157.38 9115.46 19184.17 00:25:18.006 0 00:25:18.006 15:04:56 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:18.006 15:04:56 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:18.006 15:04:56 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:18.006 15:04:56 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:18.006 15:04:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:18.006 15:04:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:18.006 15:04:56 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:18.006 15:04:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:18.264 15:04:56 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:18.264 15:04:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:18.264 15:04:56 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:18.264 15:04:56 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:18.264 15:04:56 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:25:18.264 15:04:56 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:18.264 15:04:56 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:18.264 15:04:56 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:18.264 15:04:56 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:18.264 15:04:56 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:18.264 15:04:56 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:18.264 15:04:56 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:25:18.522 [2024-07-12 15:04:57.167119] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:18.522 [2024-07-12 15:04:57.167851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a6020 (107): Transport endpoint is not connected 00:25:18.522 [2024-07-12 15:04:57.168835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a6020 (9): Bad file descriptor 00:25:18.522 [2024-07-12 15:04:57.169827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:18.522 [2024-07-12 15:04:57.169880] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:18.522 [2024-07-12 15:04:57.169900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:18.522 2024/07/12 15:04:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:25:18.522 request: 00:25:18.522 { 00:25:18.522 "method": "bdev_nvme_attach_controller", 00:25:18.522 "params": { 00:25:18.522 "name": "nvme0", 00:25:18.522 "trtype": "tcp", 00:25:18.522 "traddr": "127.0.0.1", 00:25:18.522 "adrfam": "ipv4", 00:25:18.522 "trsvcid": "4420", 00:25:18.522 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:18.522 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:18.522 "prchk_reftag": false, 00:25:18.522 "prchk_guard": false, 00:25:18.522 "hdgst": false, 00:25:18.522 "ddgst": false, 00:25:18.522 "psk": ":spdk-test:key1" 00:25:18.522 } 00:25:18.522 } 00:25:18.522 Got JSON-RPC error response 00:25:18.522 GoRPCClient: error on JSON-RPC call 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@33 -- # sn=666137786 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 666137786 00:25:18.780 1 links removed 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@33 -- # sn=914489949 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 914489949 00:25:18.780 1 links removed 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100671 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100671 ']' 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100671 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100671 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:18.780 killing process with pid 100671 00:25:18.780 Received shutdown signal, test time was about 1.000000 seconds 00:25:18.780 00:25:18.780 Latency(us) 00:25:18.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.780 =================================================================================================================== 00:25:18.780 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100671' 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@967 -- # kill 100671 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@972 -- # wait 100671 00:25:18.780 15:04:57 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100648 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100648 ']' 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100648 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:18.780 15:04:57 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100648 00:25:19.038 15:04:57 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:19.038 killing process with pid 100648 00:25:19.038 15:04:57 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:19.038 15:04:57 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100648' 00:25:19.038 15:04:57 keyring_linux -- common/autotest_common.sh@967 -- # kill 100648 00:25:19.038 15:04:57 keyring_linux -- common/autotest_common.sh@972 -- # wait 100648 00:25:19.297 ************************************ 00:25:19.297 END TEST keyring_linux 00:25:19.297 ************************************ 00:25:19.297 00:25:19.297 real 0m5.805s 00:25:19.297 user 0m12.143s 00:25:19.297 sys 0m1.406s 00:25:19.297 15:04:57 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:19.297 15:04:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:19.297 15:04:57 -- common/autotest_common.sh@1142 -- # return 0 00:25:19.297 15:04:57 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:25:19.297 15:04:57 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:25:19.297 15:04:57 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:25:19.297 15:04:57 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:25:19.297 15:04:57 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 
']' 00:25:19.297 15:04:57 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:25:19.297 15:04:57 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:25:19.297 15:04:57 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:25:19.297 15:04:57 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:25:19.297 15:04:57 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:19.297 15:04:57 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:25:19.297 15:04:57 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:25:19.297 15:04:57 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:25:19.297 15:04:57 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:25:19.297 15:04:57 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:25:19.297 15:04:57 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:25:19.297 15:04:57 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:25:19.297 15:04:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:19.297 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:25:19.297 15:04:57 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:25:19.297 15:04:57 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:25:19.297 15:04:57 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:19.297 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:25:20.667 INFO: APP EXITING 00:25:20.667 INFO: killing all VMs 00:25:20.667 INFO: killing vhost app 00:25:20.667 INFO: EXIT DONE 00:25:21.299 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:21.299 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:21.299 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:21.865 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:21.865 Cleaning 00:25:21.865 Removing: /var/run/dpdk/spdk0/config 00:25:21.865 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:21.865 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:21.865 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:21.865 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:21.865 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:21.865 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:21.865 Removing: /var/run/dpdk/spdk1/config 00:25:21.865 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:21.865 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:21.865 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:21.865 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:21.865 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:21.865 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:21.865 Removing: /var/run/dpdk/spdk2/config 00:25:21.865 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:21.865 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:21.865 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:21.865 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:21.865 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:21.865 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:21.865 Removing: /var/run/dpdk/spdk3/config 00:25:21.865 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:21.865 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:21.865 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:21.865 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:21.865 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:21.865 Removing: /var/run/dpdk/spdk3/hugepage_info 
00:25:21.865 Removing: /var/run/dpdk/spdk4/config 00:25:21.865 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:21.865 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:21.865 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:21.865 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:21.865 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:21.865 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:21.865 Removing: /dev/shm/nvmf_trace.0 00:25:21.865 Removing: /dev/shm/spdk_tgt_trace.pid60682 00:25:21.865 Removing: /var/run/dpdk/spdk0 00:25:21.865 Removing: /var/run/dpdk/spdk1 00:25:21.865 Removing: /var/run/dpdk/spdk2 00:25:21.865 Removing: /var/run/dpdk/spdk3 00:25:21.865 Removing: /var/run/dpdk/spdk4 00:25:21.865 Removing: /var/run/dpdk/spdk_pid100011 00:25:21.865 Removing: /var/run/dpdk/spdk_pid100494 00:25:21.865 Removing: /var/run/dpdk/spdk_pid100648 00:25:21.865 Removing: /var/run/dpdk/spdk_pid100671 00:25:21.865 Removing: /var/run/dpdk/spdk_pid60543 00:25:21.865 Removing: /var/run/dpdk/spdk_pid60682 00:25:21.865 Removing: /var/run/dpdk/spdk_pid60942 00:25:21.865 Removing: /var/run/dpdk/spdk_pid61036 00:25:21.865 Removing: /var/run/dpdk/spdk_pid61070 00:25:21.865 Removing: /var/run/dpdk/spdk_pid61180 00:25:21.865 Removing: /var/run/dpdk/spdk_pid61196 00:25:21.865 Removing: /var/run/dpdk/spdk_pid61314 00:25:21.865 Removing: /var/run/dpdk/spdk_pid61589 00:25:21.865 Removing: /var/run/dpdk/spdk_pid61759 00:25:21.865 Removing: /var/run/dpdk/spdk_pid61841 00:25:21.865 Removing: /var/run/dpdk/spdk_pid61914 00:25:21.865 Removing: /var/run/dpdk/spdk_pid61990 00:25:21.865 Removing: /var/run/dpdk/spdk_pid62023 00:25:21.865 Removing: /var/run/dpdk/spdk_pid62059 00:25:21.865 Removing: /var/run/dpdk/spdk_pid62120 00:25:21.865 Removing: /var/run/dpdk/spdk_pid62219 00:25:21.865 Removing: /var/run/dpdk/spdk_pid62842 00:25:21.865 Removing: /var/run/dpdk/spdk_pid62895 00:25:21.865 Removing: /var/run/dpdk/spdk_pid62964 00:25:21.865 Removing: /var/run/dpdk/spdk_pid62984 00:25:21.865 Removing: /var/run/dpdk/spdk_pid63058 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63072 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63150 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63160 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63212 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63242 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63292 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63323 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63464 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63500 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63574 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63630 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63655 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63713 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63742 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63777 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63811 00:25:21.866 Removing: /var/run/dpdk/spdk_pid63846 00:25:22.124 Removing: /var/run/dpdk/spdk_pid63875 00:25:22.124 Removing: /var/run/dpdk/spdk_pid63909 00:25:22.124 Removing: /var/run/dpdk/spdk_pid63944 00:25:22.124 Removing: /var/run/dpdk/spdk_pid63975 00:25:22.124 Removing: /var/run/dpdk/spdk_pid64015 00:25:22.124 Removing: /var/run/dpdk/spdk_pid64044 00:25:22.124 Removing: /var/run/dpdk/spdk_pid64084 00:25:22.124 Removing: /var/run/dpdk/spdk_pid64113 00:25:22.124 Removing: /var/run/dpdk/spdk_pid64144 00:25:22.124 Removing: /var/run/dpdk/spdk_pid64184 00:25:22.124 Removing: /var/run/dpdk/spdk_pid64213 00:25:22.124 Removing: 
/var/run/dpdk/spdk_pid64248 00:25:22.124 Removing: /var/run/dpdk/spdk_pid64285 00:25:22.124 Removing: /var/run/dpdk/spdk_pid64317 00:25:22.124 Removing: /var/run/dpdk/spdk_pid64352 00:25:22.124 Removing: /var/run/dpdk/spdk_pid64387 00:25:22.124 Removing: /var/run/dpdk/spdk_pid64456 00:25:22.124 Removing: /var/run/dpdk/spdk_pid64564 00:25:22.124 Removing: /var/run/dpdk/spdk_pid64952 00:25:22.124 Removing: /var/run/dpdk/spdk_pid68238 00:25:22.124 Removing: /var/run/dpdk/spdk_pid68556 00:25:22.124 Removing: /var/run/dpdk/spdk_pid71001 00:25:22.124 Removing: /var/run/dpdk/spdk_pid71377 00:25:22.124 Removing: /var/run/dpdk/spdk_pid71636 00:25:22.124 Removing: /var/run/dpdk/spdk_pid71682 00:25:22.124 Removing: /var/run/dpdk/spdk_pid72303 00:25:22.124 Removing: /var/run/dpdk/spdk_pid72730 00:25:22.124 Removing: /var/run/dpdk/spdk_pid72780 00:25:22.124 Removing: /var/run/dpdk/spdk_pid73147 00:25:22.124 Removing: /var/run/dpdk/spdk_pid73677 00:25:22.124 Removing: /var/run/dpdk/spdk_pid74125 00:25:22.124 Removing: /var/run/dpdk/spdk_pid75043 00:25:22.124 Removing: /var/run/dpdk/spdk_pid76029 00:25:22.124 Removing: /var/run/dpdk/spdk_pid76147 00:25:22.124 Removing: /var/run/dpdk/spdk_pid76217 00:25:22.124 Removing: /var/run/dpdk/spdk_pid77666 00:25:22.124 Removing: /var/run/dpdk/spdk_pid77893 00:25:22.124 Removing: /var/run/dpdk/spdk_pid83376 00:25:22.124 Removing: /var/run/dpdk/spdk_pid83827 00:25:22.124 Removing: /var/run/dpdk/spdk_pid83940 00:25:22.124 Removing: /var/run/dpdk/spdk_pid84081 00:25:22.124 Removing: /var/run/dpdk/spdk_pid84131 00:25:22.124 Removing: /var/run/dpdk/spdk_pid84172 00:25:22.124 Removing: /var/run/dpdk/spdk_pid84218 00:25:22.124 Removing: /var/run/dpdk/spdk_pid84376 00:25:22.124 Removing: /var/run/dpdk/spdk_pid84510 00:25:22.124 Removing: /var/run/dpdk/spdk_pid84766 00:25:22.124 Removing: /var/run/dpdk/spdk_pid84883 00:25:22.124 Removing: /var/run/dpdk/spdk_pid85124 00:25:22.124 Removing: /var/run/dpdk/spdk_pid85222 00:25:22.124 Removing: /var/run/dpdk/spdk_pid85352 00:25:22.124 Removing: /var/run/dpdk/spdk_pid85697 00:25:22.124 Removing: /var/run/dpdk/spdk_pid86096 00:25:22.124 Removing: /var/run/dpdk/spdk_pid86391 00:25:22.124 Removing: /var/run/dpdk/spdk_pid86874 00:25:22.124 Removing: /var/run/dpdk/spdk_pid86876 00:25:22.124 Removing: /var/run/dpdk/spdk_pid87204 00:25:22.124 Removing: /var/run/dpdk/spdk_pid87224 00:25:22.124 Removing: /var/run/dpdk/spdk_pid87238 00:25:22.124 Removing: /var/run/dpdk/spdk_pid87271 00:25:22.124 Removing: /var/run/dpdk/spdk_pid87277 00:25:22.124 Removing: /var/run/dpdk/spdk_pid87617 00:25:22.124 Removing: /var/run/dpdk/spdk_pid87661 00:25:22.124 Removing: /var/run/dpdk/spdk_pid88004 00:25:22.124 Removing: /var/run/dpdk/spdk_pid88241 00:25:22.124 Removing: /var/run/dpdk/spdk_pid88725 00:25:22.124 Removing: /var/run/dpdk/spdk_pid89291 00:25:22.124 Removing: /var/run/dpdk/spdk_pid90646 00:25:22.124 Removing: /var/run/dpdk/spdk_pid91219 00:25:22.124 Removing: /var/run/dpdk/spdk_pid91230 00:25:22.124 Removing: /var/run/dpdk/spdk_pid93167 00:25:22.124 Removing: /var/run/dpdk/spdk_pid93252 00:25:22.124 Removing: /var/run/dpdk/spdk_pid93323 00:25:22.124 Removing: /var/run/dpdk/spdk_pid93420 00:25:22.124 Removing: /var/run/dpdk/spdk_pid93566 00:25:22.124 Removing: /var/run/dpdk/spdk_pid93637 00:25:22.124 Removing: /var/run/dpdk/spdk_pid93714 00:25:22.124 Removing: /var/run/dpdk/spdk_pid93799 00:25:22.124 Removing: /var/run/dpdk/spdk_pid94129 00:25:22.124 Removing: /var/run/dpdk/spdk_pid94821 00:25:22.124 Removing: /var/run/dpdk/spdk_pid96189 
00:25:22.124 Removing: /var/run/dpdk/spdk_pid96389 00:25:22.124 Removing: /var/run/dpdk/spdk_pid96659 00:25:22.124 Removing: /var/run/dpdk/spdk_pid96944 00:25:22.124 Removing: /var/run/dpdk/spdk_pid97478 00:25:22.124 Removing: /var/run/dpdk/spdk_pid97483 00:25:22.124 Removing: /var/run/dpdk/spdk_pid97844 00:25:22.124 Removing: /var/run/dpdk/spdk_pid98003 00:25:22.124 Removing: /var/run/dpdk/spdk_pid98155 00:25:22.124 Removing: /var/run/dpdk/spdk_pid98252 00:25:22.124 Removing: /var/run/dpdk/spdk_pid98404 00:25:22.124 Removing: /var/run/dpdk/spdk_pid98513 00:25:22.124 Removing: /var/run/dpdk/spdk_pid99176 00:25:22.124 Removing: /var/run/dpdk/spdk_pid99211 00:25:22.124 Removing: /var/run/dpdk/spdk_pid99241 00:25:22.124 Removing: /var/run/dpdk/spdk_pid99493 00:25:22.124 Removing: /var/run/dpdk/spdk_pid99524 00:25:22.124 Removing: /var/run/dpdk/spdk_pid99554 00:25:22.124 Removing: /var/run/dpdk/spdk_pid99976 00:25:22.124 Clean 00:25:22.382 15:05:00 -- common/autotest_common.sh@1451 -- # return 0 00:25:22.382 15:05:00 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:25:22.382 15:05:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:22.382 15:05:00 -- common/autotest_common.sh@10 -- # set +x 00:25:22.382 15:05:00 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:25:22.382 15:05:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:22.382 15:05:00 -- common/autotest_common.sh@10 -- # set +x 00:25:22.382 15:05:00 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:22.382 15:05:00 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:22.382 15:05:00 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:22.382 15:05:00 -- spdk/autotest.sh@391 -- # hash lcov 00:25:22.382 15:05:00 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:25:22.382 15:05:00 -- spdk/autotest.sh@393 -- # hostname 00:25:22.382 15:05:00 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:22.640 geninfo: WARNING: invalid characters removed from testname! 
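The lcov capture above feeds the merge and filter passes that follow; condensed, the flow is the sketch below. This is an equivalent form, not the exact invocation: the real steps also pass the branch/function-coverage --rc flags, --no-external, and several more exclude patterns ('/usr/*', examples, spdk_lspci, spdk_top).
  cd /home/vagrant/spdk_repo/spdk
  lcov -q -c -d . -t "$(hostname)" -o ../output/cov_test.info                                 # capture counters gathered during the tests
  lcov -q -a ../output/cov_base.info -a ../output/cov_test.info -o ../output/cov_total.info   # merge with the pre-test baseline
  lcov -q -r ../output/cov_total.info '*/dpdk/*' -o ../output/cov_total.info                  # drop vendored DPDK sources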
00:25:54.710 15:05:28 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:54.710 15:05:32 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:56.608 15:05:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:59.138 15:05:37 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:02.419 15:05:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:04.952 15:05:43 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:07.482 15:05:46 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:07.482 15:05:46 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:07.482 15:05:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:26:07.482 15:05:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.482 15:05:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.482 15:05:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.482 15:05:46 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.482 15:05:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.482 15:05:46 -- paths/export.sh@5 -- $ export PATH 00:26:07.482 15:05:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.482 15:05:46 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:26:07.482 15:05:46 -- common/autobuild_common.sh@444 -- $ date +%s 00:26:07.482 15:05:46 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720796746.XXXXXX 00:26:07.482 15:05:46 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720796746.vcd084 00:26:07.482 15:05:46 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:26:07.482 15:05:46 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:26:07.482 15:05:46 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:26:07.482 15:05:46 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:26:07.482 15:05:46 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:26:07.482 15:05:46 -- common/autobuild_common.sh@460 -- $ get_config_params 00:26:07.482 15:05:46 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:26:07.482 15:05:46 -- common/autotest_common.sh@10 -- $ set +x 00:26:07.482 15:05:46 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:26:07.482 15:05:46 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:26:07.482 15:05:46 -- pm/common@17 -- $ local monitor 00:26:07.482 15:05:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:07.482 15:05:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:07.482 15:05:46 -- pm/common@25 -- $ sleep 1 00:26:07.482 15:05:46 -- pm/common@21 -- $ date +%s 00:26:07.482 15:05:46 -- pm/common@21 -- $ date +%s 00:26:07.482 15:05:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720796746 00:26:07.482 15:05:46 -- pm/common@21 -- $ 
00:26:07.482 15:05:46 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:26:07.482 15:05:46 -- pm/common@17 -- $ local monitor
00:26:07.482 15:05:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:26:07.482 15:05:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:26:07.482 15:05:46 -- pm/common@25 -- $ sleep 1
00:26:07.482 15:05:46 -- pm/common@21 -- $ date +%s
00:26:07.482 15:05:46 -- pm/common@21 -- $ date +%s
00:26:07.482 15:05:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720796746
00:26:07.482 15:05:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720796746
00:26:07.740 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720796746_collect-vmstat.pm.log
00:26:07.740 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720796746_collect-cpu-load.pm.log
00:26:08.675 15:05:47 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:26:08.675 15:05:47 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:26:08.675 15:05:47 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:26:08.675 15:05:47 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:26:08.675 15:05:47 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:26:08.675 15:05:47 -- spdk/autopackage.sh@19 -- $ timing_finish
00:26:08.675 15:05:47 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:26:08.675 15:05:47 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:26:08.675 15:05:47 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:26:08.675 15:05:47 -- spdk/autopackage.sh@20 -- $ exit 0
00:26:08.675 15:05:47 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:26:08.675 15:05:47 -- pm/common@29 -- $ signal_monitor_resources TERM
00:26:08.675 15:05:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:26:08.675 15:05:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:26:08.675 15:05:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:26:08.675 15:05:47 -- pm/common@44 -- $ pid=102368
00:26:08.675 15:05:47 -- pm/common@50 -- $ kill -TERM 102368
00:26:08.675 15:05:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:26:08.675 15:05:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:26:08.675 15:05:47 -- pm/common@44 -- $ pid=102370
00:26:08.675 15:05:47 -- pm/common@50 -- $ kill -TERM 102370
00:26:08.675 + [[ -n 5154 ]]
00:26:08.675 + sudo kill 5154
00:26:09.616 [Pipeline] }
00:26:09.635 [Pipeline] // timeout
00:26:09.641 [Pipeline] }
00:26:09.659 [Pipeline] // stage
00:26:09.666 [Pipeline] }
00:26:09.685 [Pipeline] // catchError
00:26:09.696 [Pipeline] stage
00:26:09.698 [Pipeline] { (Stop VM)
00:26:09.716 [Pipeline] sh
00:26:09.997 + vagrant halt
00:26:14.228 ==> default: Halting domain...
00:26:19.498 [Pipeline] sh
00:26:19.773 + vagrant destroy -f
00:26:23.946 ==> default: Removing domain...
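[Editor's note] The pm/common trace above brackets the packaging step with resource monitors: start_monitor_resources launches collect-cpu-load and collect-vmstat in the background, and the EXIT trap runs stop_monitor_resources, which checks each monitor's pid file under the power output directory and sends TERM. A minimal sketch of that lifecycle follows; only the collector invocations, pid-file names, and kill -TERM calls appear in the log, so the loop bodies are assumptions rather than the actual pm/common implementation.

# Hedged sketch of the monitor start/stop pattern seen above.
power_dir=/home/vagrant/spdk_repo/spdk/../output/power
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)

start_monitors() {
    local monitor
    for monitor in "${MONITOR_RESOURCES[@]}"; do
        "/home/vagrant/spdk_repo/spdk/scripts/perf/pm/$monitor" \
            -d "$power_dir" -l -p "monitor.autopackage.sh.$(date +%s)" &
        echo $! > "$power_dir/$monitor.pid"    # remembered so the EXIT trap can stop it
    done
}

stop_monitors() {
    local monitor pid
    for monitor in "${MONITOR_RESOURCES[@]}"; do
        [[ -e $power_dir/$monitor.pid ]] || continue   # monitor was never started
        pid=$(< "$power_dir/$monitor.pid")
        kill -TERM "$pid"
        rm -f "$power_dir/$monitor.pid"
    done
}

trap stop_monitors EXIT
start_monitors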
00:26:23.958 [Pipeline] sh
00:26:24.237 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/output
00:26:24.247 [Pipeline] }
00:26:24.264 [Pipeline] // stage
00:26:24.269 [Pipeline] }
00:26:24.285 [Pipeline] // dir
00:26:24.292 [Pipeline] }
00:26:24.309 [Pipeline] // wrap
00:26:24.326 [Pipeline] }
00:26:24.339 [Pipeline] // catchError
00:26:24.347 [Pipeline] stage
00:26:24.349 [Pipeline] { (Epilogue)
00:26:24.360 [Pipeline] sh
00:26:24.635 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:26:31.203 [Pipeline] catchError
00:26:31.204 [Pipeline] {
00:26:31.218 [Pipeline] sh
00:26:31.497 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:26:31.755 Artifacts sizes are good
00:26:31.764 [Pipeline] }
00:26:31.782 [Pipeline] // catchError
00:26:31.793 [Pipeline] archiveArtifacts
00:26:31.800 Archiving artifacts
00:26:32.009 [Pipeline] cleanWs
00:26:32.025 [WS-CLEANUP] Deleting project workspace...
00:26:32.025 [WS-CLEANUP] Deferred wipeout is used...
00:26:32.052 [WS-CLEANUP] done
00:26:32.054 [Pipeline] }
00:26:32.073 [Pipeline] // stage
00:26:32.096 [Pipeline] // node
00:26:32.101 [Pipeline] End of Pipeline
00:26:32.079 [Pipeline] }
00:26:32.134 Finished: SUCCESS